Dec 01 09:31:48 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 01 09:31:48 crc restorecon[4703]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 
09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:48 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 09:31:49 crc 
restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 
09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 
09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc 
restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 09:31:49 crc restorecon[4703]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 09:31:49 crc restorecon[4703]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 09:31:49 crc restorecon[4703]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 01 09:31:49 crc kubenswrapper[4933]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 01 09:31:49 crc kubenswrapper[4933]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 01 09:31:49 crc kubenswrapper[4933]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 01 09:31:49 crc kubenswrapper[4933]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 01 09:31:49 crc kubenswrapper[4933]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 01 09:31:49 crc kubenswrapper[4933]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.457808 4933 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460323 4933 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460349 4933 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460357 4933 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460363 4933 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460367 4933 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460373 4933 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460377 4933 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460382 4933 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460386 4933 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460391 4933 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460395 4933 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460400 4933 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460404 4933 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460408 4933 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460413 4933 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460417 4933 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460421 4933 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460425 4933 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460430 4933 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460434 4933 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 01 09:31:49 crc 
kubenswrapper[4933]: W1201 09:31:49.460439 4933 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460443 4933 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460448 4933 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460454 4933 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460459 4933 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460464 4933 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460468 4933 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460474 4933 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460478 4933 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460483 4933 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460488 4933 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460493 4933 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460498 4933 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460503 4933 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460507 4933 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460513 4933 feature_gate.go:330] unrecognized feature gate: Example Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460518 4933 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460523 4933 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460528 4933 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460533 4933 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460537 4933 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460541 4933 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460546 4933 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460551 4933 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460563 4933 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460568 4933 feature_gate.go:330] unrecognized 
feature gate: SigstoreImageVerification Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460573 4933 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460579 4933 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460584 4933 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460588 4933 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460595 4933 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460601 4933 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460606 4933 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460612 4933 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460617 4933 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460622 4933 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460627 4933 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460632 4933 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460637 4933 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460641 4933 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460646 4933 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460650 4933 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460654 4933 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460659 4933 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460673 4933 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460678 4933 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460685 4933 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
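[Editor's note] Three kinds of feature_gate messages interleave in this block: feature_gate.go:330 is the upstream kubelet rejecting gate names it does not recognize (these read as OpenShift-carried gates such as GatewayAPI or OnClusterBuild), while feature_gate.go:351 and :353 warn that a deprecated or already-GA gate is being pinned explicitly. Unrecognized gates are warned about and skipped rather than treated as fatal, which is why startup proceeds. The near-identical block repeats below at offsets .462, .473 and .475, a pattern consistent with the full gate map being re-applied once per configuration layer — an inference from the offsets, not something the log states. A minimal Go sketch of this tolerant parse, illustrative only and not the actual k8s.io/component-base implementation:

    package main

    import "fmt"

    type gateState struct {
    	enabled    bool
    	ga         bool
    	deprecated bool
    }

    func main() {
    	// Known gates and their maturity -- a tiny, hypothetical stand-in for
    	// the kubelet's built-in feature gate registry.
    	known := map[string]gateState{
    		"CloudDualStackNodeIPs":     {ga: true},
    		"ValidatingAdmissionPolicy": {ga: true},
    		"KMSv1":                     {deprecated: true},
    	}
    	// Gates handed down by the cluster configuration, including names the
    	// upstream kubelet has never heard of.
    	requested := map[string]bool{
    		"CloudDualStackNodeIPs": true,
    		"KMSv1":                 true,
    		"GatewayAPI":            true, // unrecognized below
    	}
    	for name, enabled := range requested {
    		st, ok := known[name]
    		if !ok {
    			// Warn and ignore: unknown gates are not fatal.
    			fmt.Printf("W unrecognized feature gate: %s\n", name)
    			continue
    		}
    		switch {
    		case st.ga:
    			fmt.Printf("W Setting GA feature gate %s=%t. It will be removed in a future release.\n", name, enabled)
    		case st.deprecated:
    			fmt.Printf("W Setting deprecated feature gate %s=%t. It will be removed in a future release.\n", name, enabled)
    		}
    		st.enabled = enabled
    		known[name] = st
    	}
    }

The real parser also handles versioned and locked gates; the sketch keeps only the warn-and-continue behavior visible in this log, which also explains why the effective "feature gates: {map[...]}" lines further down contain none of the unrecognized names.
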
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460691 4933 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460695 4933 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460700 4933 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.460705 4933 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461112 4933 flags.go:64] FLAG: --address="0.0.0.0" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461126 4933 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461141 4933 flags.go:64] FLAG: --anonymous-auth="true" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461148 4933 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461155 4933 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461161 4933 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461168 4933 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461174 4933 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461179 4933 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461184 4933 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461190 4933 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461195 4933 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461201 4933 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461206 4933 flags.go:64] FLAG: --cgroup-root="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461212 4933 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461218 4933 flags.go:64] FLAG: --client-ca-file="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461223 4933 flags.go:64] FLAG: --cloud-config="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461228 4933 flags.go:64] FLAG: --cloud-provider="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461233 4933 flags.go:64] FLAG: --cluster-dns="[]" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461244 4933 flags.go:64] FLAG: --cluster-domain="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461249 4933 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461254 4933 flags.go:64] FLAG: --config-dir="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461259 4933 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461265 4933 flags.go:64] FLAG: --container-log-max-files="5" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461272 4933 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 01 09:31:49 crc 
kubenswrapper[4933]: I1201 09:31:49.461278 4933 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461284 4933 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461289 4933 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461295 4933 flags.go:64] FLAG: --contention-profiling="false" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461320 4933 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461326 4933 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461331 4933 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461336 4933 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461343 4933 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461349 4933 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461354 4933 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461359 4933 flags.go:64] FLAG: --enable-load-reader="false" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461364 4933 flags.go:64] FLAG: --enable-server="true" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461370 4933 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461377 4933 flags.go:64] FLAG: --event-burst="100" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461383 4933 flags.go:64] FLAG: --event-qps="50" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461388 4933 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461394 4933 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461399 4933 flags.go:64] FLAG: --eviction-hard="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461406 4933 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461411 4933 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461417 4933 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461422 4933 flags.go:64] FLAG: --eviction-soft="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461428 4933 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461433 4933 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461438 4933 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461443 4933 flags.go:64] FLAG: --experimental-mounter-path="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461449 4933 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461454 4933 flags.go:64] FLAG: --fail-swap-on="true" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461459 4933 flags.go:64] FLAG: --feature-gates="" Dec 01 
09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461466 4933 flags.go:64] FLAG: --file-check-frequency="20s" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461471 4933 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461477 4933 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461483 4933 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461488 4933 flags.go:64] FLAG: --healthz-port="10248" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461494 4933 flags.go:64] FLAG: --help="false" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461499 4933 flags.go:64] FLAG: --hostname-override="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461504 4933 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461510 4933 flags.go:64] FLAG: --http-check-frequency="20s" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461515 4933 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461521 4933 flags.go:64] FLAG: --image-credential-provider-config="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461526 4933 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461532 4933 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461537 4933 flags.go:64] FLAG: --image-service-endpoint="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461542 4933 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461547 4933 flags.go:64] FLAG: --kube-api-burst="100" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461552 4933 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461558 4933 flags.go:64] FLAG: --kube-api-qps="50" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461563 4933 flags.go:64] FLAG: --kube-reserved="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461569 4933 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461575 4933 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461581 4933 flags.go:64] FLAG: --kubelet-cgroups="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461586 4933 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461592 4933 flags.go:64] FLAG: --lock-file="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461598 4933 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461603 4933 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461609 4933 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461617 4933 flags.go:64] FLAG: --log-json-split-stream="false" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461622 4933 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461629 4933 flags.go:64] FLAG: --log-text-split-stream="false" Dec 01 09:31:49 crc 
kubenswrapper[4933]: I1201 09:31:49.461634 4933 flags.go:64] FLAG: --logging-format="text" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461640 4933 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461646 4933 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461651 4933 flags.go:64] FLAG: --manifest-url="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461656 4933 flags.go:64] FLAG: --manifest-url-header="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461663 4933 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461669 4933 flags.go:64] FLAG: --max-open-files="1000000" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461676 4933 flags.go:64] FLAG: --max-pods="110" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461681 4933 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461686 4933 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461693 4933 flags.go:64] FLAG: --memory-manager-policy="None" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461698 4933 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461703 4933 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461709 4933 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461714 4933 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461729 4933 flags.go:64] FLAG: --node-status-max-images="50" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461735 4933 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461741 4933 flags.go:64] FLAG: --oom-score-adj="-999" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461746 4933 flags.go:64] FLAG: --pod-cidr="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461751 4933 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461759 4933 flags.go:64] FLAG: --pod-manifest-path="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461764 4933 flags.go:64] FLAG: --pod-max-pids="-1" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461770 4933 flags.go:64] FLAG: --pods-per-core="0" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461775 4933 flags.go:64] FLAG: --port="10250" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461781 4933 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461786 4933 flags.go:64] FLAG: --provider-id="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461798 4933 flags.go:64] FLAG: --qos-reserved="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461805 4933 flags.go:64] FLAG: --read-only-port="10255" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461810 4933 flags.go:64] FLAG: --register-node="true" Dec 01 09:31:49 crc 
kubenswrapper[4933]: I1201 09:31:49.461818 4933 flags.go:64] FLAG: --register-schedulable="true" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461823 4933 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461833 4933 flags.go:64] FLAG: --registry-burst="10" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461838 4933 flags.go:64] FLAG: --registry-qps="5" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461844 4933 flags.go:64] FLAG: --reserved-cpus="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461849 4933 flags.go:64] FLAG: --reserved-memory="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461855 4933 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461860 4933 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461865 4933 flags.go:64] FLAG: --rotate-certificates="false" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461871 4933 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461876 4933 flags.go:64] FLAG: --runonce="false" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461881 4933 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461887 4933 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461892 4933 flags.go:64] FLAG: --seccomp-default="false" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461898 4933 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461903 4933 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461909 4933 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461914 4933 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461920 4933 flags.go:64] FLAG: --storage-driver-password="root" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461925 4933 flags.go:64] FLAG: --storage-driver-secure="false" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461931 4933 flags.go:64] FLAG: --storage-driver-table="stats" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461936 4933 flags.go:64] FLAG: --storage-driver-user="root" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461941 4933 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461948 4933 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461954 4933 flags.go:64] FLAG: --system-cgroups="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461958 4933 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461967 4933 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461973 4933 flags.go:64] FLAG: --tls-cert-file="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461978 4933 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.461992 4933 flags.go:64] FLAG: --tls-min-version="" Dec 01 09:31:49 
crc kubenswrapper[4933]: I1201 09:31:49.461998 4933 flags.go:64] FLAG: --tls-private-key-file="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.462002 4933 flags.go:64] FLAG: --topology-manager-policy="none" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.462008 4933 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.462014 4933 flags.go:64] FLAG: --topology-manager-scope="container" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.462019 4933 flags.go:64] FLAG: --v="2" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.462026 4933 flags.go:64] FLAG: --version="false" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.462033 4933 flags.go:64] FLAG: --vmodule="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.462039 4933 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.462044 4933 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462191 4933 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462200 4933 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462205 4933 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462210 4933 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462216 4933 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462220 4933 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462225 4933 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462229 4933 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462235 4933 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462239 4933 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462244 4933 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462248 4933 feature_gate.go:330] unrecognized feature gate: Example Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462253 4933 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462257 4933 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462262 4933 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462266 4933 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462271 4933 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462276 4933 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 01 09:31:49 crc 
kubenswrapper[4933]: W1201 09:31:49.462280 4933 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462286 4933 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462293 4933 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462297 4933 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462320 4933 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462325 4933 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462330 4933 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462334 4933 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462338 4933 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462343 4933 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462347 4933 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462352 4933 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462356 4933 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462360 4933 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462366 4933 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462370 4933 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462374 4933 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462379 4933 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462383 4933 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462387 4933 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462392 4933 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462396 4933 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462402 4933 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462407 4933 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462412 4933 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462417 4933 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462422 4933 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462427 4933 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462432 4933 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462436 4933 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462441 4933 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462446 4933 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462450 4933 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462454 4933 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462459 4933 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462464 4933 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462471 4933 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462475 4933 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462481 4933 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462485 4933 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462490 4933 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462495 4933 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462499 4933 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462504 4933 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462510 4933 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462515 4933 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462521 4933 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462526 4933 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462530 4933 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462535 4933 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462540 4933 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462545 4933 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.462551 4933 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.462567 4933 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.473634 4933 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.473668 4933 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473821 4933 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473832 4933 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473838 4933 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473843 4933 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473848 4933 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473853 4933 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473858 4933 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473867 4933 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473872 4933 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473877 4933 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473884 4933 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473892 4933 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473898 4933 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473904 4933 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473909 4933 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473913 4933 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473918 4933 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473922 4933 feature_gate.go:330] unrecognized feature gate: Example Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473926 4933 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473936 4933 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473940 4933 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473945 4933 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473950 4933 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473954 4933 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473959 4933 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473964 4933 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473968 4933 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473972 4933 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473977 4933 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473981 4933 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473986 4933 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473995 4933 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.473999 4933 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474006 4933 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474011 4933 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474017 4933 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474023 4933 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474028 4933 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474034 4933 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474040 4933 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474046 4933 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474051 4933 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474055 4933 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474064 4933 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474068 4933 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474072 4933 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474078 4933 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474082 4933 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474087 4933 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474092 4933 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474096 4933 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474100 4933 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474105 4933 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474109 4933 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474114 4933 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474118 4933 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474129 4933 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474133 4933 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474138 4933 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 01 09:31:49 crc 
kubenswrapper[4933]: W1201 09:31:49.474142 4933 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474147 4933 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474151 4933 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474156 4933 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474162 4933 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474166 4933 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474170 4933 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474175 4933 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474181 4933 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474187 4933 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474196 4933 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474214 4933 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.474225 4933 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.474986 4933 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475013 4933 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475026 4933 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475038 4933 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475049 4933 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475060 4933 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475070 4933 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475079 4933 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475088 4933 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475098 4933 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475108 4933 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475117 4933 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475128 4933 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475137 4933 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475147 4933 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475157 4933 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475165 4933 feature_gate.go:330] unrecognized feature gate: Example Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475175 4933 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475184 4933 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475193 4933 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475201 4933 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475209 4933 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475220 4933 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475230 4933 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475240 4933 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475249 4933 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475257 4933 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475265 4933 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475273 4933 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475284 4933 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475293 4933 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475326 4933 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475334 4933 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475344 4933 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475380 4933 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475388 4933 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475397 4933 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475405 4933 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475413 4933 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475421 4933 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475429 4933 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475436 4933 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475444 4933 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475452 4933 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475460 4933 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475468 4933 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475476 4933 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475483 4933 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475491 4933 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475498 4933 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475506 4933 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475514 4933 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475521 4933 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475529 4933 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475536 4933 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475544 4933 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475552 4933 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475559 4933 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475567 4933 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475575 4933 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475583 4933 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475591 4933 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475598 4933 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475606 4933 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475614 4933 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475623 4933 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475632 4933 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475640 4933 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475648 4933 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475656 4933 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.475664 4933 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.475677 4933 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.475912 4933 server.go:940] "Client rotation is on, will bootstrap in background"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.480177 4933 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.480337 4933 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.481118 4933 server.go:997] "Starting client certificate rotation"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.481162 4933 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.481569 4933 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-20 10:25:00.071678928 +0000 UTC
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.481671 4933 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 456h53m10.590011633s for next certificate rotation
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.489807 4933 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.492425 4933 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.500816 4933 log.go:25] "Validated CRI v1 runtime API"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.516925 4933 log.go:25] "Validated CRI v1 image API"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.519057 4933 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.522013 4933 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-01-09-27-10-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.522051 4933 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.537661 4933 manager.go:217] Machine: {Timestamp:2025-12-01 09:31:49.536488789 +0000 UTC m=+0.178212424 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:8391db47-1ebd-4bbe-b230-559ad9e10347 BootID:b561dab6-afeb-4be9-867b-b25a2a946b2a Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:62:ee:b1 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:62:ee:b1 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:77:74:8d Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:8d:5c:61 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d2:5d:22 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:2e:7d:02 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:9a:44:02:97:a6:27 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:42:bc:f9:4e:7e:5f Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.537948 4933 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.538198 4933 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.538721 4933 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.538898 4933 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.538931 4933 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.539133 4933 topology_manager.go:138] "Creating topology manager with none policy"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.539143 4933 container_manager_linux.go:303] "Creating device plugin manager"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.539331 4933 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.539355 4933 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.539614 4933 state_mem.go:36] "Initialized new in-memory state store"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.539692 4933 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.540277 4933 kubelet.go:418] "Attempting to sync node with API server"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.540295 4933 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.540334 4933 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.540348 4933 kubelet.go:324] "Adding apiserver pod source"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.540360 4933 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.541836 4933 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.542143 4933 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.542876 4933 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.543093 4933 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.543106 4933 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused
Dec 01 09:31:49 crc kubenswrapper[4933]: E1201 09:31:49.543183 4933 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError"
Dec 01 09:31:49 crc kubenswrapper[4933]: E1201 09:31:49.543201 4933 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.543457 4933 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.543485 4933 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.543495 4933 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.543504 4933 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.543520 4933 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.543527 4933 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.543534 4933 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.543546 4933 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.543553 4933 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.543560 4933 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.543570 4933 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.543577 4933 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.543738 4933 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.544248 4933 server.go:1280] "Started kubelet"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.544338 4933 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.545197 4933 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.545363 4933 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.545873 4933 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 01 09:31:49 crc systemd[1]: Started Kubernetes Kubelet.
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.546586 4933 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.546612 4933 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.546889 4933 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 13:11:11.118133246 +0000 UTC
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.546991 4933 server.go:460] "Adding debug handlers to kubelet server"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.547291 4933 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.547337 4933 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 01 09:31:49 crc kubenswrapper[4933]: E1201 09:31:49.547355 4933 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.547436 4933 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.554199 4933 factory.go:55] Registering systemd factory
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.554238 4933 factory.go:221] Registration of the systemd container factory successfully
Dec 01 09:31:49 crc kubenswrapper[4933]: E1201 09:31:49.553812 4933 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.142:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187d0d88a59b22ad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-01 09:31:49.544223405 +0000 UTC m=+0.185947020,LastTimestamp:2025-12-01 09:31:49.544223405 +0000 UTC m=+0.185947020,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.554815 4933 factory.go:153] Registering CRI-O factory
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.554899 4933 factory.go:221] Registration of the crio container factory successfully
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.555037 4933 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.555110 4933 factory.go:103] Registering Raw factory
Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.555115 4933 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused
Dec 01 09:31:49 crc kubenswrapper[4933]: E1201 09:31:49.555189 4933 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="200ms"
Dec 01 09:31:49 crc kubenswrapper[4933]: E1201 09:31:49.555209 4933 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.555163 4933 manager.go:1196] Started watching for new ooms in manager
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.556013 4933 manager.go:319] Starting recovery of all containers
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.564654 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.564737 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.564751 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.564763 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.564774 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.564788 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.564799 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.564811 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.564827 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.564838 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.564851 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.564862 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.565877 4933 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566266 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566285 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566298 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566334 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566346 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566357 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566372 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566383 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566397 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566408 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566419 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566432 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566448 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566464 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566481 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566531 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566543 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566556 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566568 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566608 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566619 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566631 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566644 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566657 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566668 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566680 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566693 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566704 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566719 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566732 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566744 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566759 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566773 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566785 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566799 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566815 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566829 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566841 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566853 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566866 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566889 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566905 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566918 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566931 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566944 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566960 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566974 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.566987 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567000 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567013 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567025 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567041 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567059 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567070 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567083 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567096 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567108 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567120 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567132 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567145 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567157 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567168 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567181 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567195 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567210 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567224 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567237 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567251 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567265 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567279 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567294 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567327 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567340 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567355 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567369 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567380 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567396 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567411 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567423 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567437 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567449 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567462 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567476 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567490 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567516 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567531 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567546 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567558 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567573 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567586 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567600 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567614 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567701 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567718 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567732 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567761 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567775 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567789 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567804 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567818 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567831 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567845 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567856 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567869 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567882 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567893 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567905 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567916 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567928 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567939 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567951 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567961 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567972 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567981 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.567993 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.568005 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.568021 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.568033 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.568045 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.568058 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.568070 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.568085 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.568096 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.568109 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.568122 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.568178 4933 reconstruct.go:130] "Volume is marked as
uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.568229 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.568240 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.568250 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.568261 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.568271 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.568281 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.568291 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570412 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570609 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570626 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570638 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570649 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570662 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570673 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570685 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570697 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570713 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570725 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570737 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570748 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570759 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570772 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570785 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570796 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570807 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570820 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570831 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570881 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570894 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570906 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570918 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570932 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570945 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570961 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570972 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.570985 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571000 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571016 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571030 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571045 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571058 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571074 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571086 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571099 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571114 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571128 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571141 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571154 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571170 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571183 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571195 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571208 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571224 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571238 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571250 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571262 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571283 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571298 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571328 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571341 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571355 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571368 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571381 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571396 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571411 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571424 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571437 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571451 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571463 4933 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571476 4933 reconstruct.go:97] "Volume reconstruction finished" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.571485 4933 reconciler.go:26] "Reconciler: start to sync state" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.573507 4933 manager.go:324] Recovery completed Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.583390 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.585258 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.585288 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.585315 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.586429 4933 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.586532 4933 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.586673 4933 state_mem.go:36] "Initialized new in-memory state store" Dec 01 09:31:49 crc kubenswrapper[4933]: E1201 09:31:49.647653 4933 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.662296 4933 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.664742 4933 policy_none.go:49] "None policy: Start" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.665846 4933 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.665928 4933 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.666239 4933 kubelet.go:2335] "Starting kubelet main sync loop" Dec 01 09:31:49 crc kubenswrapper[4933]: E1201 09:31:49.666346 4933 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.667149 4933 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.667188 4933 state_mem.go:35] "Initializing new in-memory state store" Dec 01 09:31:49 crc kubenswrapper[4933]: W1201 09:31:49.667840 4933 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused Dec 01 09:31:49 crc kubenswrapper[4933]: E1201 09:31:49.667969 4933 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.714379 4933 manager.go:334] "Starting Device Plugin manager" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.714432 4933 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.714447 4933 server.go:79] "Starting device plugin registration server" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.714867 4933 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.714889 4933 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.715104 4933 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.715377 4933 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.715402 4933 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 01 09:31:49 crc kubenswrapper[4933]: E1201 09:31:49.722538 4933 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 01 09:31:49 crc kubenswrapper[4933]: E1201 09:31:49.756951 4933 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="400ms" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.767093 4933 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.767259 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.768664 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.768712 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.768722 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.768889 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.769125 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.769190 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.769779 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.769814 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.769825 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.769960 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.770158 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.770244 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.770264 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.770286 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.770298 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.770494 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.770515 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.770523 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.770639 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.770764 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.770815 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.771620 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.771643 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.771652 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.771656 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.771691 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.771667 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.771843 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.771934 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.771944 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.771957 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.771979 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.771877 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.772880 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.772936 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.772960 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.773091 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.773130 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.773141 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.773446 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.773505 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.774266 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.774297 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.774332 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.815730 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.816731 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.816770 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.816781 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.816809 4933 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 01 09:31:49 crc kubenswrapper[4933]: E1201 09:31:49.817248 4933 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.142:6443: 
connect: connection refused" node="crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.873422 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.873594 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.873626 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.873752 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.873799 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.873867 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.873894 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.873955 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.873981 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.874026 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.874048 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.874102 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.874163 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.874198 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.874224 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975347 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975407 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975432 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975453 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975475 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975499 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975519 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975541 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975543 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975562 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975637 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975642 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975660 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975674 4933 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975685 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975687 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975701 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975729 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975750 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975763 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975783 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975799 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975833 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975857 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975883 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975907 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975925 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975948 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975611 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 01 09:31:49 crc kubenswrapper[4933]: I1201 09:31:49.975971 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.017743 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.019018 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.019095 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.019113 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.019147 4933 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 01 09:31:50 crc kubenswrapper[4933]: E1201 09:31:50.020112 4933 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.142:6443: connect: connection refused" node="crc"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.090708 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.095663 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.112273 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 01 09:31:50 crc kubenswrapper[4933]: W1201 09:31:50.119850 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-3301b7d56db9810e51db93c7ce74fa578afbedcd2afe7f936b6b893c5d43acde WatchSource:0}: Error finding container 3301b7d56db9810e51db93c7ce74fa578afbedcd2afe7f936b6b893c5d43acde: Status 404 returned error can't find the container with id 3301b7d56db9810e51db93c7ce74fa578afbedcd2afe7f936b6b893c5d43acde
Dec 01 09:31:50 crc kubenswrapper[4933]: W1201 09:31:50.124163 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-d177f813dc561e12102acf6610c382eb35e6d2aab6362fa19bbb6a899294677b WatchSource:0}: Error finding container d177f813dc561e12102acf6610c382eb35e6d2aab6362fa19bbb6a899294677b: Status 404 returned error can't find the container with id d177f813dc561e12102acf6610c382eb35e6d2aab6362fa19bbb6a899294677b
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.130082 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 01 09:31:50 crc kubenswrapper[4933]: W1201 09:31:50.138359 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-764631f2e837785094c0427b2895a75909befde8bac0fd831bf259924111e62d WatchSource:0}: Error finding container 764631f2e837785094c0427b2895a75909befde8bac0fd831bf259924111e62d: Status 404 returned error can't find the container with id 764631f2e837785094c0427b2895a75909befde8bac0fd831bf259924111e62d
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.140281 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 09:31:50 crc kubenswrapper[4933]: E1201 09:31:50.158249 4933 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="800ms"
Dec 01 09:31:50 crc kubenswrapper[4933]: W1201 09:31:50.166450 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-b6859f7ffecdb1b28781c9ca7558764222ad3e8183d5d205e46883204cc91b41 WatchSource:0}: Error finding container b6859f7ffecdb1b28781c9ca7558764222ad3e8183d5d205e46883204cc91b41: Status 404 returned error can't find the container with id b6859f7ffecdb1b28781c9ca7558764222ad3e8183d5d205e46883204cc91b41
Dec 01 09:31:50 crc kubenswrapper[4933]: W1201 09:31:50.169232 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-520b91a9392a2242438eb8a0070f0ebeb3e27f5aabc33e9768dfa62117f84725 WatchSource:0}: Error finding container 520b91a9392a2242438eb8a0070f0ebeb3e27f5aabc33e9768dfa62117f84725: Status 404 returned error can't find the container with id 520b91a9392a2242438eb8a0070f0ebeb3e27f5aabc33e9768dfa62117f84725
Dec 01 09:31:50 crc kubenswrapper[4933]: W1201 09:31:50.393017 4933 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused
Dec 01 09:31:50 crc kubenswrapper[4933]: E1201 09:31:50.393407 4933 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.420722 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.422609 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.422639 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.422648 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.422669 4933 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 01 09:31:50 crc kubenswrapper[4933]: E1201 09:31:50.422982 4933 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.142:6443: connect: connection refused" node="crc"
Dec 01 09:31:50 crc kubenswrapper[4933]: W1201 09:31:50.539953 4933 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused
Dec 01 09:31:50 crc kubenswrapper[4933]: E1201 09:31:50.540074 4933 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.545662 4933 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.548048 4933 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 18:15:59.100143429 +0000 UTC
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.548092 4933 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 416h44m8.552055339s for next certificate rotation
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.671435 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66"}
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.671549 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b6859f7ffecdb1b28781c9ca7558764222ad3e8183d5d205e46883204cc91b41"}
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.673127 4933 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7eb3dd4183e3b84376101c7a0efbac3df96d9693934a5778bca7ff08e7554b42" exitCode=0
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.673201 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7eb3dd4183e3b84376101c7a0efbac3df96d9693934a5778bca7ff08e7554b42"}
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.673228 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"520b91a9392a2242438eb8a0070f0ebeb3e27f5aabc33e9768dfa62117f84725"}
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.673339 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.674039 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.674065 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.674076 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.674755 4933 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2" exitCode=0
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.674817 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2"}
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.674849 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"764631f2e837785094c0427b2895a75909befde8bac0fd831bf259924111e62d"}
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.674912 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.676552 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.676584 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.676596 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.682942 4933 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa" exitCode=0
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.683075 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa"}
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.683156 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3301b7d56db9810e51db93c7ce74fa578afbedcd2afe7f936b6b893c5d43acde"}
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.683324 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.683978 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.684392 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.684422 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.684433 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.684860 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.684909 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.684925 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.685903 4933 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="97b86f15566c5afde426670165750e324859e27846f38fa96071c4e81c1851af" exitCode=0
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.685955 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"97b86f15566c5afde426670165750e324859e27846f38fa96071c4e81c1851af"}
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.685990 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d177f813dc561e12102acf6610c382eb35e6d2aab6362fa19bbb6a899294677b"}
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.686086 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.686838 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.686869 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:31:50 crc kubenswrapper[4933]: I1201 09:31:50.686881 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:31:50 crc kubenswrapper[4933]: W1201 09:31:50.716897 4933 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused
Dec 01 09:31:50 crc kubenswrapper[4933]: E1201 09:31:50.716991 4933 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError"
Dec 01 09:31:50 crc kubenswrapper[4933]: E1201 09:31:50.798839 4933 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.142:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187d0d88a59b22ad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-01 09:31:49.544223405 +0000 UTC m=+0.185947020,LastTimestamp:2025-12-01 09:31:49.544223405 +0000 UTC m=+0.185947020,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 01 09:31:50 crc kubenswrapper[4933]: E1201 09:31:50.958929 4933 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="1.6s"
Dec 01 09:31:51 crc kubenswrapper[4933]: W1201 09:31:51.025715 4933 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused
Dec 01 09:31:51 crc kubenswrapper[4933]: E1201 09:31:51.025789 4933 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError"
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.223461 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.227405 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.227466 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.227478 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.227526 4933 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.692986 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"001dd3d1bb28861105ed423a5460657b031a040e934d0c789a766ca3f9499ba1"}
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.693087 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9beebf42bdb1ff95c1b5a3faeb820455a7c0fcb764f0b1f3fd892575a95334b0"}
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.693100 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4cb9e78d01fb4f20fa14d20f2dd4b044fcedbebda97e0437e562e4c8b5e9072a"}
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.693212 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.695904 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.695975 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.695990 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.697365 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2b4034d688e26719a662808aa5c0756a8cff2b474424f6aff2987cbbf181f9e0"}
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.697486 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.698664 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.698694 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.698704 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.700205 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58"}
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.700231 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223"}
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.700242 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e"}
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.700298 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.700889 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.700907 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.700915 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.707286 4933 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b52e070bfc462a9f38c7a0ace6b75c51d491b514615d85ca57ca9a5485a653c9" exitCode=0
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.707364 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b52e070bfc462a9f38c7a0ace6b75c51d491b514615d85ca57ca9a5485a653c9"}
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.707458 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.708022 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.708048 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.708058 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.712349 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba"}
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.712379 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382"}
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.712390 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901"}
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.712399 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5"}
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.712410 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541"}
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.712483 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.713151 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.713174 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:31:51 crc kubenswrapper[4933]: I1201 09:31:51.713182 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:31:52 crc kubenswrapper[4933]: I1201 09:31:52.446937 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 01 09:31:52 crc kubenswrapper[4933]: I1201 09:31:52.717700 4933 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c94a4d2d4128f74a0a0ecb00b4af1ed2835760620593fee78ca33f43a58d8623" exitCode=0
Dec 01 09:31:52 crc kubenswrapper[4933]: I1201 09:31:52.717757 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c94a4d2d4128f74a0a0ecb00b4af1ed2835760620593fee78ca33f43a58d8623"}
Dec 01 09:31:52 crc kubenswrapper[4933]: I1201 09:31:52.717921 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:31:52 crc kubenswrapper[4933]: I1201 09:31:52.717944 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:31:52 crc kubenswrapper[4933]: I1201 09:31:52.718670 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:31:52 crc kubenswrapper[4933]: I1201 09:31:52.719165 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:31:52 crc kubenswrapper[4933]: I1201 09:31:52.719205 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:31:52 crc kubenswrapper[4933]: I1201 09:31:52.719215 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:31:52 crc kubenswrapper[4933]: I1201 09:31:52.719479 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:31:52 crc kubenswrapper[4933]: I1201 09:31:52.719518 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:31:52 crc kubenswrapper[4933]: I1201 09:31:52.719529 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:31:52 crc kubenswrapper[4933]: I1201 09:31:52.720516 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:31:52 crc kubenswrapper[4933]: I1201 09:31:52.720534 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:31:52 crc kubenswrapper[4933]: I1201 09:31:52.720543 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:31:53 crc kubenswrapper[4933]: I1201 09:31:53.438628 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 01 09:31:53 crc kubenswrapper[4933]: I1201 09:31:53.725519 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3815386b2976c2ce2dcae87a7aae2ddcfa0a53205ef1d81168c015a58b2385c1"}
Dec 01 09:31:53 crc kubenswrapper[4933]: I1201 09:31:53.725577 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9539eb1cbec2f844ae9cccd4ac924105f6a11db5e1e03436eb369f3683e3f5d2"}
Dec 01 09:31:53 crc kubenswrapper[4933]: I1201 09:31:53.725588 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f985c5d3848b8e8d2b0ad0995a2e5e65ebff87952226a2c74e07f62dd62f41ec"}
Dec 01 09:31:53 crc kubenswrapper[4933]: I1201 09:31:53.725599 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2735f4a06b7a5b90a9b73750be04fb2598144d207bc7fcff5487142b5ce7845f"}
Dec 01 09:31:53 crc kubenswrapper[4933]: I1201 09:31:53.725611 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ee912d789a5b6c5d2c9c7d8574b1975096969f054f46154f669ded20b6f19bad"}
Dec 01 09:31:53 crc kubenswrapper[4933]: I1201 09:31:53.725693 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:31:53 crc kubenswrapper[4933]: I1201 09:31:53.725741 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:31:53 crc kubenswrapper[4933]: I1201 09:31:53.726869 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:31:53 crc kubenswrapper[4933]: I1201 09:31:53.726901 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:31:53 crc kubenswrapper[4933]: I1201 09:31:53.726912 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:31:53 crc kubenswrapper[4933]: I1201 09:31:53.726992 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:31:53 crc kubenswrapper[4933]: I1201 09:31:53.727015 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:31:53 crc kubenswrapper[4933]: I1201 09:31:53.727033 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:31:54 crc kubenswrapper[4933]: I1201 09:31:54.015126 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 09:31:54 crc kubenswrapper[4933]: I1201 09:31:54.015407 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:31:54 crc kubenswrapper[4933]: I1201 09:31:54.016798 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:31:54 crc kubenswrapper[4933]: I1201 09:31:54.016861 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:31:54 crc kubenswrapper[4933]: I1201 09:31:54.016878 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:31:54 crc kubenswrapper[4933]: I1201 09:31:54.160280 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 01 09:31:54 crc kubenswrapper[4933]: I1201 09:31:54.235564 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 09:31:54 crc kubenswrapper[4933]: I1201 09:31:54.240542 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 09:31:54 crc kubenswrapper[4933]: I1201 09:31:54.728265 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:31:54 crc kubenswrapper[4933]: I1201 09:31:54.728727 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:31:54 crc kubenswrapper[4933]: I1201 09:31:54.728909 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 09:31:54 crc kubenswrapper[4933]: I1201 09:31:54.729810 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:31:54 crc kubenswrapper[4933]: I1201 09:31:54.729882 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:31:54 crc kubenswrapper[4933]: I1201 09:31:54.729907 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:31:54 crc kubenswrapper[4933]: I1201 09:31:54.730120 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:31:54 crc kubenswrapper[4933]: I1201 09:31:54.730163 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:31:54 crc kubenswrapper[4933]: I1201 09:31:54.730179 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:31:54 crc kubenswrapper[4933]: I1201 09:31:54.858742 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Dec 01 09:31:54 crc kubenswrapper[4933]: I1201 09:31:54.858984 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:31:54 crc kubenswrapper[4933]: I1201 09:31:54.860285 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:31:54 crc kubenswrapper[4933]: I1201 09:31:54.860371 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:31:54 crc kubenswrapper[4933]: I1201 09:31:54.860390 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:31:55 crc kubenswrapper[4933]: I1201 09:31:55.391297 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 09:31:55 crc kubenswrapper[4933]: I1201 09:31:55.730459 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:31:55 crc kubenswrapper[4933]: I1201 09:31:55.731701 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:31:55 crc kubenswrapper[4933]: I1201 09:31:55.731772 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:31:55 crc kubenswrapper[4933]: I1201 09:31:55.731788 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:31:56 crc kubenswrapper[4933]: I1201 09:31:56.732514 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:31:56 crc kubenswrapper[4933]: I1201 09:31:56.733878 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:31:56 crc kubenswrapper[4933]: I1201 09:31:56.733949 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:31:56 crc kubenswrapper[4933]: I1201 09:31:56.733971 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:31:57 crc kubenswrapper[4933]: I1201 09:31:57.599699 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 01 09:31:57 crc kubenswrapper[4933]: I1201 09:31:57.599983 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:31:57 crc kubenswrapper[4933]: I1201 09:31:57.601575 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:31:57 crc kubenswrapper[4933]: I1201 09:31:57.601635 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:31:57 crc kubenswrapper[4933]: I1201 09:31:57.601658 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:31:57 crc kubenswrapper[4933]: I1201 09:31:57.696575 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Dec 01 09:31:57 crc kubenswrapper[4933]: I1201 09:31:57.696791 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:31:57 crc kubenswrapper[4933]: I1201 09:31:57.698063 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:31:57 crc kubenswrapper[4933]: I1201 09:31:57.698108 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:31:57 crc kubenswrapper[4933]: I1201 09:31:57.698120 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:31:58 crc kubenswrapper[4933]: I1201 09:31:58.392453 4933 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 01 09:31:58 crc kubenswrapper[4933]: I1201 09:31:58.392608 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 01 09:31:59 crc kubenswrapper[4933]: E1201 09:31:59.722844 4933 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 01 09:32:00 crc kubenswrapper[4933]: I1201 09:32:00.226037 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 09:32:00 crc kubenswrapper[4933]: I1201 09:32:00.226283 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:32:00 crc kubenswrapper[4933]: I1201 09:32:00.227927 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:32:00 crc kubenswrapper[4933]: I1201 09:32:00.227979 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:32:00 crc kubenswrapper[4933]: I1201 09:32:00.227991 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:32:01 crc kubenswrapper[4933]: E1201 09:32:01.229070 4933 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Dec 01 09:32:01 crc kubenswrapper[4933]: I1201 09:32:01.545169 4933 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Dec 01 09:32:02 crc kubenswrapper[4933]: W1201 09:32:02.276067 4933 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Dec 01 09:32:02 crc kubenswrapper[4933]: I1201 09:32:02.276166 4933 trace.go:236] Trace[578756582]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Dec-2025 09:31:52.274) (total time: 10001ms):
Dec 01 09:32:02 crc kubenswrapper[4933]: Trace[578756582]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:32:02.276)
Dec 01 09:32:02 crc kubenswrapper[4933]: Trace[578756582]: [10.001605934s] [10.001605934s] END
Dec 01 09:32:02 crc kubenswrapper[4933]: E1201 09:32:02.276185 4933 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Dec 01 09:32:02 crc kubenswrapper[4933]: W1201 09:32:02.299787 4933 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Dec 01 09:32:02 crc kubenswrapper[4933]: I1201 09:32:02.299931 4933 trace.go:236] Trace[1420327079]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Dec-2025 09:31:52.298) (total time: 10001ms):
Dec 01 09:32:02 crc kubenswrapper[4933]: Trace[1420327079]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:32:02.299)
Dec 01 09:32:02 crc kubenswrapper[4933]: Trace[1420327079]: [10.001524328s] [10.001524328s] END
Dec 01 09:32:02 crc kubenswrapper[4933]: E1201 09:32:02.299964 4933 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Dec 01 09:32:02 crc kubenswrapper[4933]: E1201 09:32:02.560826 4933 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Dec 01 09:32:02 crc kubenswrapper[4933]: I1201 09:32:02.829897 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:32:02 crc kubenswrapper[4933]: I1201 09:32:02.831151 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:32:02 crc kubenswrapper[4933]: I1201 09:32:02.831220 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:32:02 crc kubenswrapper[4933]: I1201 09:32:02.831234 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:32:02 crc kubenswrapper[4933]: I1201 09:32:02.831261 4933 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 01 09:32:02 crc kubenswrapper[4933]: I1201 09:32:02.962378 4933 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 01 09:32:02 crc kubenswrapper[4933]: I1201 09:32:02.962439 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 01 09:32:02 crc kubenswrapper[4933]: I1201 09:32:02.966781 4933 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 01 09:32:02 crc kubenswrapper[4933]: I1201 09:32:02.966841 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 01 09:32:03 crc kubenswrapper[4933]: I1201 09:32:03.446282 4933 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]log ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]etcd ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/openshift.io-api-request-count-filter ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/openshift.io-startkubeinformers ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/priority-and-fairness-config-consumer ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/priority-and-fairness-filter ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/start-apiextensions-informers ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/start-apiextensions-controllers ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/crd-informer-synced ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/start-system-namespaces-controller ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/start-cluster-authentication-info-controller ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/start-legacy-token-tracking-controller ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/start-service-ip-repair-controllers ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Dec 01 09:32:03 crc kubenswrapper[4933]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/priority-and-fairness-config-producer ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/bootstrap-controller ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/start-kube-aggregator-informers ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/apiservice-status-local-available-controller ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/apiservice-status-remote-available-controller ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/apiservice-registration-controller ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/apiservice-wait-for-first-sync ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/apiservice-discovery-controller ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/kube-apiserver-autoregistration ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]autoregister-completion ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/apiservice-openapi-controller ok
Dec 01 09:32:03 crc kubenswrapper[4933]: [+]poststarthook/apiservice-openapiv3-controller ok
Dec 01 09:32:03 crc kubenswrapper[4933]: livez check failed
Dec 01 09:32:03 crc kubenswrapper[4933]: I1201 09:32:03.446373 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 01 09:32:04 crc kubenswrapper[4933]: I1201 09:32:04.880458 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Dec 01 09:32:04 crc kubenswrapper[4933]: I1201 09:32:04.880625 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:32:04 crc kubenswrapper[4933]: I1201 09:32:04.885434 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:32:04 crc kubenswrapper[4933]: I1201 09:32:04.885597 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:32:04 crc kubenswrapper[4933]: I1201 09:32:04.886087 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:32:04 crc kubenswrapper[4933]: I1201 09:32:04.897225 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Dec 01 09:32:05 crc kubenswrapper[4933]: I1201 09:32:05.754206 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 09:32:05 crc kubenswrapper[4933]: I1201 09:32:05.755572 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:32:05 crc kubenswrapper[4933]: I1201 09:32:05.755617 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:32:05 crc kubenswrapper[4933]: I1201 09:32:05.755628 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.493592 4933 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.552094 4933 apiserver.go:52] "Watching apiserver"
Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.554991 4933 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.555272 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.555608 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.555796 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.555855 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.555996 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 01 09:32:07 crc kubenswrapper[4933]: E1201 09:32:07.555900 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.556030 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 09:32:07 crc kubenswrapper[4933]: E1201 09:32:07.556071 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.556077 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 01 09:32:07 crc kubenswrapper[4933]: E1201 09:32:07.556107 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.557110 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.557508 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.557541 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.557834 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.557923 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.557942 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.558383 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.558394 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.559519 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.578192 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.592573 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.606235 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.616058 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.629224 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.641782 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.648511 4933 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.656394 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.668111 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.967952 4933 trace.go:236] Trace[1576944468]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Dec-2025 09:31:54.050) (total time: 13917ms): Dec 01 09:32:07 crc kubenswrapper[4933]: Trace[1576944468]: ---"Objects listed" error: 13917ms (09:32:07.967) Dec 01 09:32:07 crc kubenswrapper[4933]: Trace[1576944468]: [13.917685966s] [13.917685966s] END Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.967986 4933 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.968971 4933 trace.go:236] Trace[1471781908]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Dec-2025 09:31:53.043) (total time: 14925ms): Dec 01 09:32:07 crc kubenswrapper[4933]: Trace[1471781908]: ---"Objects listed" error: 14925ms (09:32:07.968) Dec 01 09:32:07 crc kubenswrapper[4933]: Trace[1471781908]: [14.925354769s] [14.925354769s] END Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.969007 4933 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.969456 4933 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 01 09:32:07 crc kubenswrapper[4933]: E1201 09:32:07.969882 4933 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.992752 4933 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42738->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.992768 4933 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42744->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.992810 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 
192.168.126.11:42738->192.168.126.11:17697: read: connection reset by peer" Dec 01 09:32:07 crc kubenswrapper[4933]: I1201 09:32:07.992830 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42744->192.168.126.11:17697: read: connection reset by peer" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.069972 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070026 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070050 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070073 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070092 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070116 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070136 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070156 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070177 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070197 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070216 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070243 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070264 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070323 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070350 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070372 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070394 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070418 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070442 4933 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070469 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070490 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070523 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070542 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070563 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070585 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070606 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070625 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070645 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070668 4933 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070687 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070708 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070730 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070752 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070774 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070799 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070820 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070840 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070873 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070903 
4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070926 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070947 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070968 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.070989 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071010 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071031 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071052 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071074 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071095 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071118 4933 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071140 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071165 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071187 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071214 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071235 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071257 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071278 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071299 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071343 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 
09:32:08.071364 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071385 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071406 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071428 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071449 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071471 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071495 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071516 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071536 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071558 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 01 09:32:08 
crc kubenswrapper[4933]: I1201 09:32:08.071582 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071602 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071622 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071643 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071664 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071688 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071709 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071730 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071750 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071772 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071792 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071813 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071838 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071862 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071884 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071905 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071928 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071948 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071969 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.071990 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072012 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072033 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072055 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072077 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072098 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072119 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072140 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072163 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072188 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072209 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072230 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072287 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072341 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072364 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072386 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072410 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072432 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072454 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072476 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 
09:32:08.072498 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072518 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072541 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072563 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072584 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072608 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072630 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072955 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.072989 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073006 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073024 4933 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073041 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073091 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073108 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073124 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073139 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073154 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073170 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073160 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073188 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073204 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073221 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073239 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073256 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073274 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073289 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073319 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073338 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073355 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073356 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073371 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073388 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073408 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073440 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073462 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073482 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073501 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073523 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073544 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073564 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073583 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073602 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073622 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073643 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073646 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073667 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073684 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073700 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073717 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073732 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073753 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073777 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073793 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073799 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073814 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073836 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073848 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073879 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073904 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073961 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073982 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073986 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.073998 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074025 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074030 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074070 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074096 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074120 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074143 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074168 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074192 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074213 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074235 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074259 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074283 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074327 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074350 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074373 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074397 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074420 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074442 4933 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074470 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074494 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074517 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074539 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074630 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074655 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074677 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074701 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074723 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074751 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074776 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074799 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074822 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074845 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074867 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074891 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074944 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074975 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.075000 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.075025 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.075050 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.075073 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.075100 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.075128 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.075149 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.075176 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.075203 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: 
\"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.075226 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.075251 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.075273 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.075356 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.075373 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.075388 4933 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.075402 4933 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.075417 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.075431 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.075444 4933 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.075457 4933 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc 
kubenswrapper[4933]: I1201 09:32:08.075470 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.076930 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074144 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074163 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074264 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074465 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074673 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074796 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.074948 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.075131 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.075265 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.075278 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.075411 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.075423 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.075752 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.076088 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.076168 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.076221 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.076442 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.076673 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.076739 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.076947 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: E1201 09:32:08.077166 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:32:08.577146009 +0000 UTC m=+19.218869804 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.125148 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.125426 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.125620 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.132239 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.134613 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.135104 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.138475 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.138564 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.138785 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.139056 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.139014 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.077220 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.077268 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.077349 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.077492 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.078985 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.079080 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.079084 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.139040 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.079211 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.079356 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.079411 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.079443 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.079603 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.079867 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.079913 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.080197 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.080206 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.080292 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.080449 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.080577 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.080634 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.080638 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.080648 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.080788 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.080847 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.080989 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.081011 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.081036 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.081054 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.081065 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.081274 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.081425 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.081503 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.081683 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.081892 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.082483 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.083596 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.083710 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.083848 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.083917 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.084087 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.084480 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.086032 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.086610 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.086755 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.087049 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.087224 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.087624 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.087929 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.090556 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.092781 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.093026 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.093301 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.093558 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.093799 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.093811 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.093836 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.093832 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.097434 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.097739 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.097887 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.098129 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.102378 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.102916 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.103348 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.103595 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.103913 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.104134 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.104283 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.104455 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.104610 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.104678 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.104963 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.105144 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.105354 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.105512 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.105681 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.105837 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.105875 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.105928 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.106014 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.106075 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.106084 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.106170 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.106210 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.106245 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.106256 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.106419 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.106561 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.106704 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.106890 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.107120 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.107269 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.107416 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.107667 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.108151 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.108510 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.108532 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.108734 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.108800 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.110896 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.111491 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.121446 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.143704 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.144081 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.144655 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.144731 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.144751 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.146541 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.146641 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.146677 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.146694 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.147218 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.147397 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.147432 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.147450 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.147493 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.147584 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.147645 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.077202 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.147706 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.147829 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.147833 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.147853 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.147924 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.147992 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.148050 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.148214 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.148281 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.148373 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.148500 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.148688 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.148633 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.148818 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.148834 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.148869 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.149094 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.149133 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.149192 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: E1201 09:32:08.149268 4933 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 09:32:08 crc kubenswrapper[4933]: E1201 09:32:08.149352 4933 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 09:32:08 crc kubenswrapper[4933]: E1201 09:32:08.149370 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:08.649342542 +0000 UTC m=+19.291066167 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.149419 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 09:32:08 crc kubenswrapper[4933]: E1201 09:32:08.149454 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:08.649429534 +0000 UTC m=+19.291153149 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.149905 4933 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.152070 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.152585 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.152652 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.152685 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.152725 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.152746 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.153232 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.154980 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.155087 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.156251 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.158956 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.160571 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.162819 4933 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.163508 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.163912 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.165292 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.168912 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 09:32:08 crc kubenswrapper[4933]: E1201 09:32:08.170213 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 09:32:08 crc kubenswrapper[4933]: E1201 09:32:08.170235 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 09:32:08 crc kubenswrapper[4933]: E1201 09:32:08.170249 4933 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 09:32:08 crc kubenswrapper[4933]: E1201 09:32:08.170328 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:08.670285219 +0000 UTC m=+19.312008834 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.173815 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.174997 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176579 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176619 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176669 4933 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176680 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176690 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176699 4933 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176709 4933 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176720 4933 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176731 4933 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176742 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176752 4933 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176764 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176776 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176789 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: 
I1201 09:32:08.176800 4933 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176811 4933 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176819 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176831 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176845 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176858 4933 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176871 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176880 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176889 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176898 4933 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176908 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176918 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176930 4933 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 
01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176942 4933 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176954 4933 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176966 4933 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176976 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176987 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.176995 4933 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177003 4933 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177012 4933 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177023 4933 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177034 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177049 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177060 4933 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177072 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc 
kubenswrapper[4933]: I1201 09:32:08.177084 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177095 4933 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177105 4933 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177116 4933 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177126 4933 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177138 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177150 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177161 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177173 4933 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177184 4933 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177195 4933 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177208 4933 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177218 4933 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on 
node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177230 4933 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177228 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177242 4933 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177254 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177265 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177277 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177288 4933 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177299 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177335 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177347 4933 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177358 4933 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177379 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177390 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" 
(UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177403 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177414 4933 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177426 4933 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177438 4933 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177448 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177449 4933 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177484 4933 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177494 4933 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177515 4933 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177524 4933 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177533 4933 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177543 4933 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 
01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177552 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177560 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177569 4933 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177579 4933 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177588 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177597 4933 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177605 4933 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177614 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177622 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177631 4933 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177639 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177648 4933 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177657 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177665 4933 
reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177715 4933 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177725 4933 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177735 4933 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177745 4933 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177755 4933 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177765 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177775 4933 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177786 4933 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177798 4933 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177808 4933 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177822 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177834 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177845 4933 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177855 4933 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177867 4933 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177880 4933 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177893 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177904 4933 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177913 4933 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177922 4933 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177932 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177942 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177961 4933 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177970 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177979 4933 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177987 4933 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177997 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178006 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178016 4933 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178025 4933 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178035 4933 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178044 4933 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178054 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178063 4933 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178071 4933 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178080 4933 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178090 4933 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178100 4933 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178109 4933 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178118 4933 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178127 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178135 4933 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178145 4933 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178154 4933 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178162 4933 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178172 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178181 4933 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178189 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178203 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178215 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178224 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178233 4933 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178241 4933 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178250 4933 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178261 4933 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178271 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178280 4933 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178288 4933 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178297 4933 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178320 4933 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178330 4933 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178340 4933 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178350 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178360 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178369 4933 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178379 4933 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178390 4933 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178398 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178407 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178416 4933 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178426 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178434 4933 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178443 4933 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178451 4933 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178461 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178469 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178478 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178486 4933 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178494 4933 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178503 4933 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178512 4933 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178521 4933 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178530 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178539 4933 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178549 4933 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178557 4933 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178566 4933 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178575 4933 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178584 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178594 4933 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178603 4933 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178611 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178620 4933 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178628 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.178639 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.177295 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.179214 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.182769 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.183179 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.183513 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-nzz88"] Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.184737 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-nzz88" Dec 01 09:32:08 crc kubenswrapper[4933]: E1201 09:32:08.185045 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 09:32:08 crc kubenswrapper[4933]: E1201 09:32:08.185071 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 09:32:08 crc kubenswrapper[4933]: E1201 09:32:08.185083 4933 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 09:32:08 crc kubenswrapper[4933]: E1201 09:32:08.185129 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:08.685114365 +0000 UTC m=+19.326837980 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.190698 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.190934 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.192277 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.200224 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.208669 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.216977 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.218238 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.226227 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.243726 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.256188 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.267236 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.278177 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.279422 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c24a92ea-5279-4bf2-847f-04981f1c330a-hosts-file\") pod \"node-resolver-nzz88\" (UID: \"c24a92ea-5279-4bf2-847f-04981f1c330a\") " pod="openshift-dns/node-resolver-nzz88" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.279470 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk8cm\" (UniqueName: \"kubernetes.io/projected/c24a92ea-5279-4bf2-847f-04981f1c330a-kube-api-access-tk8cm\") pod \"node-resolver-nzz88\" (UID: \"c24a92ea-5279-4bf2-847f-04981f1c330a\") " pod="openshift-dns/node-resolver-nzz88" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.279500 4933 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.279513 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.290017 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.298555 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.307608 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 
01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.319119 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.328482 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.339103 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.348747 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.365729 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.380151 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c24a92ea-5279-4bf2-847f-04981f1c330a-hosts-file\") pod \"node-resolver-nzz88\" (UID: \"c24a92ea-5279-4bf2-847f-04981f1c330a\") " pod="openshift-dns/node-resolver-nzz88" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.380205 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk8cm\" (UniqueName: \"kubernetes.io/projected/c24a92ea-5279-4bf2-847f-04981f1c330a-kube-api-access-tk8cm\") pod \"node-resolver-nzz88\" (UID: \"c24a92ea-5279-4bf2-847f-04981f1c330a\") " pod="openshift-dns/node-resolver-nzz88" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.380380 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c24a92ea-5279-4bf2-847f-04981f1c330a-hosts-file\") pod \"node-resolver-nzz88\" (UID: \"c24a92ea-5279-4bf2-847f-04981f1c330a\") " pod="openshift-dns/node-resolver-nzz88" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.398242 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk8cm\" (UniqueName: \"kubernetes.io/projected/c24a92ea-5279-4bf2-847f-04981f1c330a-kube-api-access-tk8cm\") pod \"node-resolver-nzz88\" (UID: \"c24a92ea-5279-4bf2-847f-04981f1c330a\") " pod="openshift-dns/node-resolver-nzz88" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.443184 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.443863 4933 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.443915 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get 
\"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.446796 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.452444 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.460881 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.469065 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.469432 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.475640 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.478988 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.490344 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:08 crc kubenswrapper[4933]: I1201 09:32:08.490395 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.912103 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.925622 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.926228 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-nzz88" Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.926336 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:09 crc kubenswrapper[4933]: E1201 09:32:09.926589 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:32:09 crc kubenswrapper[4933]: E1201 09:32:09.926670 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.926738 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:09 crc kubenswrapper[4933]: E1201 09:32:09.926839 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.926956 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.927058 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:09 crc kubenswrapper[4933]: E1201 09:32:09.928063 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 09:32:09 crc kubenswrapper[4933]: E1201 09:32:09.928089 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 09:32:09 crc kubenswrapper[4933]: E1201 09:32:09.928102 4933 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 09:32:09 crc kubenswrapper[4933]: E1201 09:32:09.931013 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:32:10.92708675 +0000 UTC m=+21.568810355 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:32:09 crc kubenswrapper[4933]: E1201 09:32:09.931217 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:10.931184612 +0000 UTC m=+21.572908227 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 09:32:09 crc kubenswrapper[4933]: E1201 09:32:09.931410 4933 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 09:32:09 crc kubenswrapper[4933]: E1201 09:32:09.931566 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:10.931507579 +0000 UTC m=+21.573231194 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.931092 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.956157 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.956194 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:09 crc kubenswrapper[4933]: E1201 09:32:09.956348 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 09:32:09 crc kubenswrapper[4933]: E1201 09:32:09.956369 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 09:32:09 crc kubenswrapper[4933]: E1201 09:32:09.957746 4933 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 09:32:09 crc kubenswrapper[4933]: E1201 09:32:09.957878 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:10.95784909 +0000 UTC m=+21.599572695 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.959027 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.960278 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.960937 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.969019 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.973028 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.973717 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 01 09:32:09 crc kubenswrapper[4933]: E1201 09:32:09.976671 4933 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 09:32:09 crc kubenswrapper[4933]: E1201 09:32:09.976735 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:10.976718475 +0000 UTC m=+21.618442090 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.977378 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.978293 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.980736 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.981683 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.991293 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.991934 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.994061 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.994579 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.994717 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.995496 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.996777 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.997485 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 01 09:32:09 crc kubenswrapper[4933]: I1201 09:32:09.999699 4933 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba" exitCode=255 Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.000501 4933 scope.go:117] "RemoveContainer" containerID="c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.001415 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.001892 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.002757 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.004113 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.005023 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.006336 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.007126 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.010265 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.011006 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.012028 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.013934 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.014620 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.021217 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.021718 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.022466 4933 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.023033 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.025114 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.028081 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.032390 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.035200 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.036959 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.037597 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.038657 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.040180 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.041572 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.042175 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.045260 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.046661 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.048010 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.048628 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.050023 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.050953 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 01 09:32:10 crc 
kubenswrapper[4933]: I1201 09:32:10.059762 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.060448 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.061087 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.062133 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.062873 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.063567 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.065820 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.066962 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-k4lcd"] Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.067274 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba"} Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.067950 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zccpd"] Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.068509 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"16201e6e0f0a7b0628a087dacee3c3f7e18035c65ef0b1363ad489f0e5e7f6f2"} Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.068533 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-4fncv"] Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.068713 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-ftnw9"] Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.068781 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.069066 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.069123 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.074873 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-qvh8t"] Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.089207 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.089619 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.089919 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-qvh8t" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.099057 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.099288 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.099371 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.099491 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.100015 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.100532 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.100675 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.100711 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.100834 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.100914 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.100977 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.101105 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.101156 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.101221 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.101344 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.101446 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.101547 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.101588 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.101658 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 01 
09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.105156 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.105390 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.105514 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.127222 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.136553 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.161669 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-log-socket\") pod 
\"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.161716 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-cni-netd\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.161741 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-host-run-k8s-cni-cncf-io\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.161767 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8zrx\" (UniqueName: \"kubernetes.io/projected/cae5a541-953b-49b6-8dfa-d19cdd133d79-kube-api-access-t8zrx\") pod \"multus-additional-cni-plugins-ftnw9\" (UID: \"cae5a541-953b-49b6-8dfa-d19cdd133d79\") " pod="openshift-multus/multus-additional-cni-plugins-ftnw9" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.161798 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.161821 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d49bee31-b7e9-4daa-986f-b6f58c663813-ovnkube-script-lib\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.161840 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-multus-conf-dir\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.161863 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-run-openvswitch\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.161881 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-hostroot\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.161902 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cae5a541-953b-49b6-8dfa-d19cdd133d79-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ftnw9\" (UID: \"cae5a541-953b-49b6-8dfa-d19cdd133d79\") " pod="openshift-multus/multus-additional-cni-plugins-ftnw9" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.161922 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-system-cni-dir\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.161939 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2d2befd5-f33d-48b0-9873-bf540dc9895c-host\") pod \"node-ca-qvh8t\" (UID: \"2d2befd5-f33d-48b0-9873-bf540dc9895c\") " pod="openshift-image-registry/node-ca-qvh8t" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.161958 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-run-ovn\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.161977 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-multus-daemon-config\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.161998 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-host-run-multus-certs\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162019 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cae5a541-953b-49b6-8dfa-d19cdd133d79-cnibin\") pod \"multus-additional-cni-plugins-ftnw9\" (UID: \"cae5a541-953b-49b6-8dfa-d19cdd133d79\") " pod="openshift-multus/multus-additional-cni-plugins-ftnw9" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162040 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-host-run-netns\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162062 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cae5a541-953b-49b6-8dfa-d19cdd133d79-cni-binary-copy\") pod \"multus-additional-cni-plugins-ftnw9\" (UID: \"cae5a541-953b-49b6-8dfa-d19cdd133d79\") " pod="openshift-multus/multus-additional-cni-plugins-ftnw9" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162083 4933 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d49bee31-b7e9-4daa-986f-b6f58c663813-ovn-node-metrics-cert\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162101 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2d2befd5-f33d-48b0-9873-bf540dc9895c-serviceca\") pod \"node-ca-qvh8t\" (UID: \"2d2befd5-f33d-48b0-9873-bf540dc9895c\") " pod="openshift-image-registry/node-ca-qvh8t" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162124 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/31deca5a-8ffe-4967-b02f-98a2043ddb23-rootfs\") pod \"machine-config-daemon-k4lcd\" (UID: \"31deca5a-8ffe-4967-b02f-98a2043ddb23\") " pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162147 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-host-var-lib-cni-multus\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162166 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-host-var-lib-kubelet\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162186 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-multus-cni-dir\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162208 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31deca5a-8ffe-4967-b02f-98a2043ddb23-proxy-tls\") pod \"machine-config-daemon-k4lcd\" (UID: \"31deca5a-8ffe-4967-b02f-98a2043ddb23\") " pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162227 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-etc-openvswitch\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162247 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-cni-bin\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc 
kubenswrapper[4933]: I1201 09:32:10.162266 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-run-ovn-kubernetes\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162285 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-etc-kubernetes\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162559 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-kubelet\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162585 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d49bee31-b7e9-4daa-986f-b6f58c663813-ovnkube-config\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162606 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-multus-socket-dir-parent\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162626 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8p8k\" (UniqueName: \"kubernetes.io/projected/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-kube-api-access-w8p8k\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162649 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wthcb\" (UniqueName: \"kubernetes.io/projected/2d2befd5-f33d-48b0-9873-bf540dc9895c-kube-api-access-wthcb\") pod \"node-ca-qvh8t\" (UID: \"2d2befd5-f33d-48b0-9873-bf540dc9895c\") " pod="openshift-image-registry/node-ca-qvh8t" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162672 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cae5a541-953b-49b6-8dfa-d19cdd133d79-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ftnw9\" (UID: \"cae5a541-953b-49b6-8dfa-d19cdd133d79\") " pod="openshift-multus/multus-additional-cni-plugins-ftnw9" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162705 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-slash\") pod \"ovnkube-node-zccpd\" (UID: 
\"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162725 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-run-netns\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162746 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-run-systemd\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162766 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9968\" (UniqueName: \"kubernetes.io/projected/d49bee31-b7e9-4daa-986f-b6f58c663813-kube-api-access-d9968\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162788 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-cni-binary-copy\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162811 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh2lc\" (UniqueName: \"kubernetes.io/projected/31deca5a-8ffe-4967-b02f-98a2043ddb23-kube-api-access-vh2lc\") pod \"machine-config-daemon-k4lcd\" (UID: \"31deca5a-8ffe-4967-b02f-98a2043ddb23\") " pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162833 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-var-lib-openvswitch\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162853 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-os-release\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162874 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cae5a541-953b-49b6-8dfa-d19cdd133d79-os-release\") pod \"multus-additional-cni-plugins-ftnw9\" (UID: \"cae5a541-953b-49b6-8dfa-d19cdd133d79\") " pod="openshift-multus/multus-additional-cni-plugins-ftnw9" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162897 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31deca5a-8ffe-4967-b02f-98a2043ddb23-mcd-auth-proxy-config\") pod \"machine-config-daemon-k4lcd\" (UID: \"31deca5a-8ffe-4967-b02f-98a2043ddb23\") " pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162919 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-systemd-units\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162941 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-node-log\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162963 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d49bee31-b7e9-4daa-986f-b6f58c663813-env-overrides\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.162990 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-cnibin\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.163012 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-host-var-lib-cni-bin\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.163034 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cae5a541-953b-49b6-8dfa-d19cdd133d79-system-cni-dir\") pod \"multus-additional-cni-plugins-ftnw9\" (UID: \"cae5a541-953b-49b6-8dfa-d19cdd133d79\") " pod="openshift-multus/multus-additional-cni-plugins-ftnw9" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.170684 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.199879 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.215645 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.232231 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.251717 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267007 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31deca5a-8ffe-4967-b02f-98a2043ddb23-proxy-tls\") pod \"machine-config-daemon-k4lcd\" (UID: \"31deca5a-8ffe-4967-b02f-98a2043ddb23\") " pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267045 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-etc-openvswitch\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267062 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-cni-bin\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267078 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-multus-cni-dir\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267105 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-kubelet\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267122 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-run-ovn-kubernetes\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267137 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-etc-kubernetes\") pod 
\"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267135 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-cni-bin\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267152 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d49bee31-b7e9-4daa-986f-b6f58c663813-ovnkube-config\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267155 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-etc-openvswitch\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267189 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-run-ovn-kubernetes\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267168 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-multus-socket-dir-parent\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267219 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-kubelet\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267221 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-multus-socket-dir-parent\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267242 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8p8k\" (UniqueName: \"kubernetes.io/projected/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-kube-api-access-w8p8k\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267256 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-etc-kubernetes\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 
09:32:10.267268 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-slash\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267293 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-run-netns\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267374 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-run-systemd\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267394 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9968\" (UniqueName: \"kubernetes.io/projected/d49bee31-b7e9-4daa-986f-b6f58c663813-kube-api-access-d9968\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267818 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-cni-binary-copy\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267838 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d49bee31-b7e9-4daa-986f-b6f58c663813-ovnkube-config\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267841 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wthcb\" (UniqueName: \"kubernetes.io/projected/2d2befd5-f33d-48b0-9873-bf540dc9895c-kube-api-access-wthcb\") pod \"node-ca-qvh8t\" (UID: \"2d2befd5-f33d-48b0-9873-bf540dc9895c\") " pod="openshift-image-registry/node-ca-qvh8t" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267876 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cae5a541-953b-49b6-8dfa-d19cdd133d79-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ftnw9\" (UID: \"cae5a541-953b-49b6-8dfa-d19cdd133d79\") " pod="openshift-multus/multus-additional-cni-plugins-ftnw9" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267915 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh2lc\" (UniqueName: \"kubernetes.io/projected/31deca5a-8ffe-4967-b02f-98a2043ddb23-kube-api-access-vh2lc\") pod \"machine-config-daemon-k4lcd\" (UID: \"31deca5a-8ffe-4967-b02f-98a2043ddb23\") " pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267937 4933 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-var-lib-openvswitch\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267956 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31deca5a-8ffe-4967-b02f-98a2043ddb23-mcd-auth-proxy-config\") pod \"machine-config-daemon-k4lcd\" (UID: \"31deca5a-8ffe-4967-b02f-98a2043ddb23\") " pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267973 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-systemd-units\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267989 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-node-log\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268005 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d49bee31-b7e9-4daa-986f-b6f58c663813-env-overrides\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268021 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-os-release\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268038 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cae5a541-953b-49b6-8dfa-d19cdd133d79-os-release\") pod \"multus-additional-cni-plugins-ftnw9\" (UID: \"cae5a541-953b-49b6-8dfa-d19cdd133d79\") " pod="openshift-multus/multus-additional-cni-plugins-ftnw9" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268060 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-cnibin\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268077 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-host-var-lib-cni-bin\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268095 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/cae5a541-953b-49b6-8dfa-d19cdd133d79-system-cni-dir\") pod \"multus-additional-cni-plugins-ftnw9\" (UID: \"cae5a541-953b-49b6-8dfa-d19cdd133d79\") " pod="openshift-multus/multus-additional-cni-plugins-ftnw9" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268111 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-log-socket\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268126 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-cni-netd\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268145 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-host-run-k8s-cni-cncf-io\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268162 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8zrx\" (UniqueName: \"kubernetes.io/projected/cae5a541-953b-49b6-8dfa-d19cdd133d79-kube-api-access-t8zrx\") pod \"multus-additional-cni-plugins-ftnw9\" (UID: \"cae5a541-953b-49b6-8dfa-d19cdd133d79\") " pod="openshift-multus/multus-additional-cni-plugins-ftnw9" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268183 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268202 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d49bee31-b7e9-4daa-986f-b6f58c663813-ovnkube-script-lib\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268219 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-multus-conf-dir\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267457 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-run-netns\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268268 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-run-openvswitch\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267509 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-run-systemd\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268245 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-run-openvswitch\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268407 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-hostroot\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268677 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cae5a541-953b-49b6-8dfa-d19cdd133d79-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ftnw9\" (UID: \"cae5a541-953b-49b6-8dfa-d19cdd133d79\") " pod="openshift-multus/multus-additional-cni-plugins-ftnw9" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268678 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-hostroot\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268710 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-node-log\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268724 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-host-var-lib-cni-bin\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268744 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-multus-cni-dir\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268771 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-log-socket\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc 
kubenswrapper[4933]: I1201 09:32:10.268744 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-cnibin\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268753 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cae5a541-953b-49b6-8dfa-d19cdd133d79-system-cni-dir\") pod \"multus-additional-cni-plugins-ftnw9\" (UID: \"cae5a541-953b-49b6-8dfa-d19cdd133d79\") " pod="openshift-multus/multus-additional-cni-plugins-ftnw9" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268810 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268907 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-os-release\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268908 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-var-lib-openvswitch\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268993 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cae5a541-953b-49b6-8dfa-d19cdd133d79-os-release\") pod \"multus-additional-cni-plugins-ftnw9\" (UID: \"cae5a541-953b-49b6-8dfa-d19cdd133d79\") " pod="openshift-multus/multus-additional-cni-plugins-ftnw9" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.269035 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-cni-netd\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.269074 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-host-run-k8s-cni-cncf-io\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.269107 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-systemd-units\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.269143 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-multus-conf-dir\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.269251 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d49bee31-b7e9-4daa-986f-b6f58c663813-ovnkube-script-lib\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.269510 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d49bee31-b7e9-4daa-986f-b6f58c663813-env-overrides\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.269563 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31deca5a-8ffe-4967-b02f-98a2043ddb23-mcd-auth-proxy-config\") pod \"machine-config-daemon-k4lcd\" (UID: \"31deca5a-8ffe-4967-b02f-98a2043ddb23\") " pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.269729 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-cni-binary-copy\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.267443 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-slash\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.268533 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cae5a541-953b-49b6-8dfa-d19cdd133d79-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ftnw9\" (UID: \"cae5a541-953b-49b6-8dfa-d19cdd133d79\") " pod="openshift-multus/multus-additional-cni-plugins-ftnw9" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.270641 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-system-cni-dir\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.270678 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2d2befd5-f33d-48b0-9873-bf540dc9895c-host\") pod \"node-ca-qvh8t\" (UID: \"2d2befd5-f33d-48b0-9873-bf540dc9895c\") " pod="openshift-image-registry/node-ca-qvh8t" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.270694 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.270883 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-system-cni-dir\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.270914 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2d2befd5-f33d-48b0-9873-bf540dc9895c-host\") pod \"node-ca-qvh8t\" (UID: \"2d2befd5-f33d-48b0-9873-bf540dc9895c\") " pod="openshift-image-registry/node-ca-qvh8t" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.270705 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-run-ovn\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.271401 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-multus-daemon-config\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.271511 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-run-ovn\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.271430 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-host-run-multus-certs\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.271586 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cae5a541-953b-49b6-8dfa-d19cdd133d79-cnibin\") pod \"multus-additional-cni-plugins-ftnw9\" (UID: 
\"cae5a541-953b-49b6-8dfa-d19cdd133d79\") " pod="openshift-multus/multus-additional-cni-plugins-ftnw9" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.271610 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-host-run-netns\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.271632 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cae5a541-953b-49b6-8dfa-d19cdd133d79-cni-binary-copy\") pod \"multus-additional-cni-plugins-ftnw9\" (UID: \"cae5a541-953b-49b6-8dfa-d19cdd133d79\") " pod="openshift-multus/multus-additional-cni-plugins-ftnw9" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.271640 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-host-run-multus-certs\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.271655 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cae5a541-953b-49b6-8dfa-d19cdd133d79-cnibin\") pod \"multus-additional-cni-plugins-ftnw9\" (UID: \"cae5a541-953b-49b6-8dfa-d19cdd133d79\") " pod="openshift-multus/multus-additional-cni-plugins-ftnw9" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.271656 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d49bee31-b7e9-4daa-986f-b6f58c663813-ovn-node-metrics-cert\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.271677 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-host-run-netns\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.271695 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2d2befd5-f33d-48b0-9873-bf540dc9895c-serviceca\") pod \"node-ca-qvh8t\" (UID: \"2d2befd5-f33d-48b0-9873-bf540dc9895c\") " pod="openshift-image-registry/node-ca-qvh8t" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.271715 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/31deca5a-8ffe-4967-b02f-98a2043ddb23-rootfs\") pod \"machine-config-daemon-k4lcd\" (UID: \"31deca5a-8ffe-4967-b02f-98a2043ddb23\") " pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.271734 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-host-var-lib-cni-multus\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc 
kubenswrapper[4933]: I1201 09:32:10.271751 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-host-var-lib-kubelet\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.271804 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-host-var-lib-kubelet\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.272109 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cae5a541-953b-49b6-8dfa-d19cdd133d79-cni-binary-copy\") pod \"multus-additional-cni-plugins-ftnw9\" (UID: \"cae5a541-953b-49b6-8dfa-d19cdd133d79\") " pod="openshift-multus/multus-additional-cni-plugins-ftnw9" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.272144 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/31deca5a-8ffe-4967-b02f-98a2043ddb23-rootfs\") pod \"machine-config-daemon-k4lcd\" (UID: \"31deca5a-8ffe-4967-b02f-98a2043ddb23\") " pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.272148 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-host-var-lib-cni-multus\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.272675 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2d2befd5-f33d-48b0-9873-bf540dc9895c-serviceca\") pod \"node-ca-qvh8t\" (UID: \"2d2befd5-f33d-48b0-9873-bf540dc9895c\") " pod="openshift-image-registry/node-ca-qvh8t" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.273057 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-multus-daemon-config\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.274004 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cae5a541-953b-49b6-8dfa-d19cdd133d79-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ftnw9\" (UID: \"cae5a541-953b-49b6-8dfa-d19cdd133d79\") " pod="openshift-multus/multus-additional-cni-plugins-ftnw9" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.277562 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31deca5a-8ffe-4967-b02f-98a2043ddb23-proxy-tls\") pod \"machine-config-daemon-k4lcd\" (UID: \"31deca5a-8ffe-4967-b02f-98a2043ddb23\") " pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.295748 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d49bee31-b7e9-4daa-986f-b6f58c663813-ovn-node-metrics-cert\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.299199 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.300220 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wthcb\" (UniqueName: \"kubernetes.io/projected/2d2befd5-f33d-48b0-9873-bf540dc9895c-kube-api-access-wthcb\") pod \"node-ca-qvh8t\" (UID: \"2d2befd5-f33d-48b0-9873-bf540dc9895c\") " pod="openshift-image-registry/node-ca-qvh8t" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.300758 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh2lc\" (UniqueName: \"kubernetes.io/projected/31deca5a-8ffe-4967-b02f-98a2043ddb23-kube-api-access-vh2lc\") pod \"machine-config-daemon-k4lcd\" (UID: \"31deca5a-8ffe-4967-b02f-98a2043ddb23\") " pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.307554 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9968\" (UniqueName: \"kubernetes.io/projected/d49bee31-b7e9-4daa-986f-b6f58c663813-kube-api-access-d9968\") pod \"ovnkube-node-zccpd\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.308756 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.309522 4933 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-t8zrx\" (UniqueName: \"kubernetes.io/projected/cae5a541-953b-49b6-8dfa-d19cdd133d79-kube-api-access-t8zrx\") pod \"multus-additional-cni-plugins-ftnw9\" (UID: \"cae5a541-953b-49b6-8dfa-d19cdd133d79\") " pod="openshift-multus/multus-additional-cni-plugins-ftnw9" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.309880 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8p8k\" (UniqueName: \"kubernetes.io/projected/f0c7b4b8-8e07-4bd4-b811-cdb373873e8a-kube-api-access-w8p8k\") pod \"multus-4fncv\" (UID: \"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\") " pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.321239 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.342663 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.358234 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.371077 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.379465 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.392872 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.407181 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54
319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.416484 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.426632 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.427073 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-4fncv" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.440173 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:10 crc kubenswrapper[4933]: W1201 09:32:10.441623 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0c7b4b8_8e07_4bd4_b811_cdb373873e8a.slice/crio-db7ac1e04534cd321915e69a988325fc10de4d807da820bac8580c86752b0ca8 WatchSource:0}: Error finding container db7ac1e04534cd321915e69a988325fc10de4d807da820bac8580c86752b0ca8: Status 404 returned error can't find the container with id db7ac1e04534cd321915e69a988325fc10de4d807da820bac8580c86752b0ca8 Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.451095 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.451110 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.461912 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01
T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.474744 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.478965 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.499611 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.505930 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-qvh8t" Dec 01 09:32:10 crc kubenswrapper[4933]: W1201 09:32:10.598995 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd49bee31_b7e9_4daa_986f_b6f58c663813.slice/crio-81a563a9935749eba26db6f6e6876cc28a7d87e9118941b7816dd644cb486c78 WatchSource:0}: Error finding container 81a563a9935749eba26db6f6e6876cc28a7d87e9118941b7816dd644cb486c78: Status 404 returned error can't find the container with id 81a563a9935749eba26db6f6e6876cc28a7d87e9118941b7816dd644cb486c78 Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.977942 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.978047 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.978080 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.978099 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:10 crc kubenswrapper[4933]: I1201 09:32:10.978119 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:10 crc kubenswrapper[4933]: E1201 09:32:10.978149 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:32:12.978122553 +0000 UTC m=+23.619846168 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:32:10 crc kubenswrapper[4933]: E1201 09:32:10.978263 4933 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 09:32:10 crc kubenswrapper[4933]: E1201 09:32:10.978278 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 09:32:10 crc kubenswrapper[4933]: E1201 09:32:10.978280 4933 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 09:32:10 crc kubenswrapper[4933]: E1201 09:32:10.978356 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 09:32:10 crc kubenswrapper[4933]: E1201 09:32:10.978382 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 09:32:10 crc kubenswrapper[4933]: E1201 09:32:10.978321 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:12.978300079 +0000 UTC m=+23.620023694 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 09:32:10 crc kubenswrapper[4933]: E1201 09:32:10.978334 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 09:32:10 crc kubenswrapper[4933]: E1201 09:32:10.978443 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:12.978417741 +0000 UTC m=+23.620141366 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 09:32:10 crc kubenswrapper[4933]: E1201 09:32:10.978443 4933 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 09:32:10 crc kubenswrapper[4933]: E1201 09:32:10.978517 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:12.978503443 +0000 UTC m=+23.620227238 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 09:32:10 crc kubenswrapper[4933]: E1201 09:32:10.978399 4933 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 09:32:10 crc kubenswrapper[4933]: E1201 09:32:10.978621 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:12.978596546 +0000 UTC m=+23.620320161 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.012078 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qvh8t" event={"ID":"2d2befd5-f33d-48b0-9873-bf540dc9895c","Type":"ContainerStarted","Data":"2dcfea28057a9a6ac20f0e9454d88926ccedd77d144aee5cadfd31ada29d8375"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.013838 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.013869 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.013883 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b5c07591b1b64b178243cba72b97ac7f48185cf288a9c3be3561c51e935a8a60"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.015182 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" event={"ID":"cae5a541-953b-49b6-8dfa-d19cdd133d79","Type":"ContainerStarted","Data":"ca7646bdf9278a2dcf5348632cb91617ec0641fa998d33cf1b3c2ab25f1106e1"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.016806 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4fncv" event={"ID":"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a","Type":"ContainerStarted","Data":"8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.016839 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4fncv" event={"ID":"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a","Type":"ContainerStarted","Data":"db7ac1e04534cd321915e69a988325fc10de4d807da820bac8580c86752b0ca8"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.018069 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.018110 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9b6ec06424e2976ad72d184193c363a885332488002a491b476a068d81ff4f35"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.019634 4933 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.021253 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.021649 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.022325 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nzz88" event={"ID":"c24a92ea-5279-4bf2-847f-04981f1c330a","Type":"ContainerStarted","Data":"eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.022362 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nzz88" event={"ID":"c24a92ea-5279-4bf2-847f-04981f1c330a","Type":"ContainerStarted","Data":"5f12df3f6ae02f631ae260b30f89b7a7397238157801bcdd379500ceaaa93f69"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.023360 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerStarted","Data":"3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.023389 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerStarted","Data":"0f3bd5c936f454b4077ac319f028f70f2e87bfc79ab85ad1700f6d661e614c89"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.026855 4933 generic.go:334] "Generic (PLEG): container finished" podID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerID="f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9" exitCode=0 Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.026922 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerDied","Data":"f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.026957 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerStarted","Data":"81a563a9935749eba26db6f6e6876cc28a7d87e9118941b7816dd644cb486c78"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.052354 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.072480 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.099425 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.115130 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01
T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.135609 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.152875 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.170058 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.170078 4933 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.171735 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.171765 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.171773 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.171856 4933 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.179188 4933 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.179535 4933 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.180387 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.180403 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.180410 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.180423 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.180462 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:11Z","lastTransitionTime":"2025-12-01T09:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.193067 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: E1201 09:32:11.194131 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.198047 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.198078 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.198087 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.198099 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.198107 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:11Z","lastTransitionTime":"2025-12-01T09:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.205216 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: E1201 09:32:11.208841 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.212109 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.212128 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.212136 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.212149 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.212157 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:11Z","lastTransitionTime":"2025-12-01T09:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.220411 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 
2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: E1201 09:32:11.222404 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.225895 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.225927 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.225938 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.225952 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.225962 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:11Z","lastTransitionTime":"2025-12-01T09:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.236800 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: 
E1201 09:32:11.238890 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider 
started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d
34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.249002 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.249042 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:11 
crc kubenswrapper[4933]: I1201 09:32:11.249055 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.249071 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.249083 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:11Z","lastTransitionTime":"2025-12-01T09:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.258059 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: E1201 09:32:11.260414 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: E1201 09:32:11.260798 4933 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.265810 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.265840 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.265849 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.265862 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.265871 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:11Z","lastTransitionTime":"2025-12-01T09:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.271629 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.288346 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.304498 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.320556 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.341885 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.355067 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.368334 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.368630 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.368699 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.368777 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.368842 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:11Z","lastTransitionTime":"2025-12-01T09:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.370537 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.384225 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.403382 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z 
is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.417529 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 
09:32:11.428119 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.446132 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.463811 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.472078 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.472118 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.472128 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.472146 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.472157 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:11Z","lastTransitionTime":"2025-12-01T09:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.484124 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.501210 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.514501 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.575690 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.576156 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.576171 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.576194 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.576207 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:11Z","lastTransitionTime":"2025-12-01T09:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.666755 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:11 crc kubenswrapper[4933]: E1201 09:32:11.666970 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.666788 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:11 crc kubenswrapper[4933]: E1201 09:32:11.667077 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.666755 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:11 crc kubenswrapper[4933]: E1201 09:32:11.667151 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.679157 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.679210 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.679221 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.679244 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.679257 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:11Z","lastTransitionTime":"2025-12-01T09:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.788288 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.788632 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.788745 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.788880 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.788992 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:11Z","lastTransitionTime":"2025-12-01T09:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.891935 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.891973 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.891982 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.892001 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.892013 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:11Z","lastTransitionTime":"2025-12-01T09:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.995096 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.995153 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.995163 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.995184 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:11 crc kubenswrapper[4933]: I1201 09:32:11.995199 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:11Z","lastTransitionTime":"2025-12-01T09:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.033274 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerStarted","Data":"d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa"} Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.040249 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerStarted","Data":"07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1"} Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.042445 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qvh8t" event={"ID":"2d2befd5-f33d-48b0-9873-bf540dc9895c","Type":"ContainerStarted","Data":"d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c"} Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.044476 4933 generic.go:334] "Generic (PLEG): container finished" podID="cae5a541-953b-49b6-8dfa-d19cdd133d79" containerID="1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682" exitCode=0 Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.044559 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" event={"ID":"cae5a541-953b-49b6-8dfa-d19cdd133d79","Type":"ContainerDied","Data":"1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682"} Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.059110 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.080079 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.097669 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.098589 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.098710 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.098809 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.098910 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.099049 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:12Z","lastTransitionTime":"2025-12-01T09:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.113870 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.128483 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.145638 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.162222 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.177455 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.195534 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.201906 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.201957 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.201969 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.201988 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.202000 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:12Z","lastTransitionTime":"2025-12-01T09:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.210280 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.232527 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.247801 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.265590 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.282187 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.298131 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.305739 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.305800 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.305812 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.305844 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.305864 4933 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:12Z","lastTransitionTime":"2025-12-01T09:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.311496 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.326103 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.344597 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.361029 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.382504 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.399165 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.408830 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.408885 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.408912 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.408929 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.408943 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:12Z","lastTransitionTime":"2025-12-01T09:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.413622 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.429647 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.446049 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.468876 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z 
is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.484125 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.499909 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.509633 4933 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.511226 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.511253 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.511266 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.511281 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.511293 4933 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:12Z","lastTransitionTime":"2025-12-01T09:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.613576 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.613622 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.613633 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.613648 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.613658 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:12Z","lastTransitionTime":"2025-12-01T09:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.716295 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.716346 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.716355 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.716369 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.716379 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:12Z","lastTransitionTime":"2025-12-01T09:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.819131 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.819189 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.819202 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.819223 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.819235 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:12Z","lastTransitionTime":"2025-12-01T09:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.921621 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.921662 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.921676 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.921693 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:12 crc kubenswrapper[4933]: I1201 09:32:12.921718 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:12Z","lastTransitionTime":"2025-12-01T09:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.002042 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.002140 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.002163 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.002198 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.002225 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:13 crc kubenswrapper[4933]: E1201 09:32:13.002282 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:32:17.002254946 +0000 UTC m=+27.643978561 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:32:13 crc kubenswrapper[4933]: E1201 09:32:13.002331 4933 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 09:32:13 crc kubenswrapper[4933]: E1201 09:32:13.002415 4933 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 09:32:13 crc kubenswrapper[4933]: E1201 09:32:13.002436 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 09:32:13 crc kubenswrapper[4933]: E1201 09:32:13.002461 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 09:32:13 crc kubenswrapper[4933]: E1201 09:32:13.002464 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 09:32:13 crc kubenswrapper[4933]: E1201 09:32:13.002496 4933 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 09:32:13 crc kubenswrapper[4933]: E1201 09:32:13.002501 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 09:32:13 crc kubenswrapper[4933]: E1201 09:32:13.002514 4933 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 09:32:13 crc kubenswrapper[4933]: E1201 09:32:13.002441 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:17.00241746 +0000 UTC m=+27.644141265 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 09:32:13 crc kubenswrapper[4933]: E1201 09:32:13.002569 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:17.002554533 +0000 UTC m=+27.644278148 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 09:32:13 crc kubenswrapper[4933]: E1201 09:32:13.002585 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:17.002579074 +0000 UTC m=+27.644302689 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 09:32:13 crc kubenswrapper[4933]: E1201 09:32:13.002605 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:17.002599454 +0000 UTC m=+27.644323069 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.024358 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.024392 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.024401 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.024415 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.024424 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:13Z","lastTransitionTime":"2025-12-01T09:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.050988 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerStarted","Data":"d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d"} Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.051033 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerStarted","Data":"e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3"} Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.051046 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerStarted","Data":"c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976"} Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.051056 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerStarted","Data":"6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940"} Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.051067 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerStarted","Data":"c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38"} Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.052220 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b"} Dec 01 09:32:13 crc kubenswrapper[4933]: 
I1201 09:32:13.053755 4933 generic.go:334] "Generic (PLEG): container finished" podID="cae5a541-953b-49b6-8dfa-d19cdd133d79" containerID="c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9" exitCode=0 Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.053811 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" event={"ID":"cae5a541-953b-49b6-8dfa-d19cdd133d79","Type":"ContainerDied","Data":"c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9"} Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.065910 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.088286 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.103422 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.118649 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.127093 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.127140 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.127154 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.127172 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.127183 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:13Z","lastTransitionTime":"2025-12-01T09:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.132865 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.152577 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.167530 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.184274 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.202712 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.224081 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.229746 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.229795 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.229811 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.229828 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.229839 4933 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:13Z","lastTransitionTime":"2025-12-01T09:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.239770 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.256704 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.270033 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.283614 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.296023 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.312848 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\
\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.326863 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.332710 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.332763 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.332777 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.332796 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.332809 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:13Z","lastTransitionTime":"2025-12-01T09:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.342274 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.355699 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.368609 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.381608 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.395423 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
,{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.411197 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.428271 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift
-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.435242 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.435355 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.435366 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.435381 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.435393 4933 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:13Z","lastTransitionTime":"2025-12-01T09:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.443105 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.456784 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.470536 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.483251 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:13Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.539186 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.539239 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.539248 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.539264 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.539280 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:13Z","lastTransitionTime":"2025-12-01T09:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.641860 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.641903 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.641914 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.641929 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.641940 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:13Z","lastTransitionTime":"2025-12-01T09:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.667015 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.667069 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:13 crc kubenswrapper[4933]: E1201 09:32:13.667196 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.667230 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:13 crc kubenswrapper[4933]: E1201 09:32:13.667554 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:32:13 crc kubenswrapper[4933]: E1201 09:32:13.667666 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.744588 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.744633 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.744642 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.744661 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.744672 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:13Z","lastTransitionTime":"2025-12-01T09:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.847752 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.847808 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.847819 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.847837 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.847849 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:13Z","lastTransitionTime":"2025-12-01T09:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.950613 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.950666 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.950676 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.950692 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:13 crc kubenswrapper[4933]: I1201 09:32:13.950706 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:13Z","lastTransitionTime":"2025-12-01T09:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.054013 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.054056 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.054068 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.054519 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.054546 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:14Z","lastTransitionTime":"2025-12-01T09:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.058800 4933 generic.go:334] "Generic (PLEG): container finished" podID="cae5a541-953b-49b6-8dfa-d19cdd133d79" containerID="ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519" exitCode=0 Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.058913 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" event={"ID":"cae5a541-953b-49b6-8dfa-d19cdd133d79","Type":"ContainerDied","Data":"ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519"} Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.081860 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:14Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.096439 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:14Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.115478 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:14Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.134633 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:14Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.150574 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:14Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.158522 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.158568 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.158578 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.158593 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.158608 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:14Z","lastTransitionTime":"2025-12-01T09:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.162482 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-01T09:32:14Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.180731 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCo
unt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c
86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:14Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.192489 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:14Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.202776 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:14Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.216821 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:14Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.231707 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:14Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.247811 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\
":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary
-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:14Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.261631 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:14Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.261724 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.261756 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.261765 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.261779 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.261788 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:14Z","lastTransitionTime":"2025-12-01T09:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.271823 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:14Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.364469 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.364509 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.364518 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.364533 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.364542 4933 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:14Z","lastTransitionTime":"2025-12-01T09:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.466422 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.466458 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.466470 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.466488 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.466501 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:14Z","lastTransitionTime":"2025-12-01T09:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.568733 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.568774 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.568787 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.568803 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.568814 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:14Z","lastTransitionTime":"2025-12-01T09:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.671443 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.671488 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.671500 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.671514 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.671525 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:14Z","lastTransitionTime":"2025-12-01T09:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.774037 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.774064 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.774072 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.774084 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.774093 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:14Z","lastTransitionTime":"2025-12-01T09:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.876548 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.876598 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.876612 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.876629 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.876642 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:14Z","lastTransitionTime":"2025-12-01T09:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.979644 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.979740 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.979765 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.979796 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:14 crc kubenswrapper[4933]: I1201 09:32:14.979819 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:14Z","lastTransitionTime":"2025-12-01T09:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.065713 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerStarted","Data":"726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1"} Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.068791 4933 generic.go:334] "Generic (PLEG): container finished" podID="cae5a541-953b-49b6-8dfa-d19cdd133d79" containerID="b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53" exitCode=0 Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.068822 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" event={"ID":"cae5a541-953b-49b6-8dfa-d19cdd133d79","Type":"ContainerDied","Data":"b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53"} Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.082087 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.082143 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.082155 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.082175 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.082191 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:15Z","lastTransitionTime":"2025-12-01T09:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.083959 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:15Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.097030 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:15Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.112898 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:15Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.123867 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:15Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.137870 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
,{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:15Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.150659 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:15Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.165054 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:15Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.178061 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:15Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.184539 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.184565 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.184573 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.184587 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.184597 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:15Z","lastTransitionTime":"2025-12-01T09:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.190629 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:15Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.204275 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:15Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.215389 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:15Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.228435 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:15Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.250664 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\
\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:15Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.267077 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:15Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.286853 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.286881 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.286888 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.286903 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.286911 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:15Z","lastTransitionTime":"2025-12-01T09:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.388777 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.388813 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.388822 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.388835 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.388845 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:15Z","lastTransitionTime":"2025-12-01T09:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.490871 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.490906 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.490915 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.490929 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.490939 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:15Z","lastTransitionTime":"2025-12-01T09:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.592963 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.593003 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.593011 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.593028 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.593037 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:15Z","lastTransitionTime":"2025-12-01T09:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.670225 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.670430 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.670479 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:15 crc kubenswrapper[4933]: E1201 09:32:15.670490 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:32:15 crc kubenswrapper[4933]: E1201 09:32:15.670546 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:32:15 crc kubenswrapper[4933]: E1201 09:32:15.670582 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.695756 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.695800 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.695811 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.695829 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.695845 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:15Z","lastTransitionTime":"2025-12-01T09:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.798139 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.798187 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.798195 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.798211 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.798221 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:15Z","lastTransitionTime":"2025-12-01T09:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.900825 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.900872 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.900886 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.900902 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:15 crc kubenswrapper[4933]: I1201 09:32:15.900911 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:15Z","lastTransitionTime":"2025-12-01T09:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.003808 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.003862 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.003877 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.003899 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.003912 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:16Z","lastTransitionTime":"2025-12-01T09:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.106146 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.106202 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.106218 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.106239 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.106255 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:16Z","lastTransitionTime":"2025-12-01T09:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.209466 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.209505 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.209515 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.209529 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.209542 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:16Z","lastTransitionTime":"2025-12-01T09:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.312657 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.312697 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.312710 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.312730 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.312744 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:16Z","lastTransitionTime":"2025-12-01T09:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.415547 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.415591 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.415604 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.415620 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.415633 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:16Z","lastTransitionTime":"2025-12-01T09:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.517888 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.517939 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.517957 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.517981 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.517998 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:16Z","lastTransitionTime":"2025-12-01T09:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.620769 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.620809 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.620819 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.620836 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.620847 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:16Z","lastTransitionTime":"2025-12-01T09:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.724523 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.725038 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.725062 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.725092 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.725120 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:16Z","lastTransitionTime":"2025-12-01T09:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.827802 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.827853 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.827868 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.827891 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.827903 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:16Z","lastTransitionTime":"2025-12-01T09:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.930838 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.930922 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.930936 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.930971 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:16 crc kubenswrapper[4933]: I1201 09:32:16.930988 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:16Z","lastTransitionTime":"2025-12-01T09:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.034694 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.034914 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.034977 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.035037 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.035146 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:17Z","lastTransitionTime":"2025-12-01T09:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.048434 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:32:17 crc kubenswrapper[4933]: E1201 09:32:17.048773 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:32:25.048739747 +0000 UTC m=+35.690463372 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.048945 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.049096 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.049223 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:17 crc kubenswrapper[4933]: E1201 09:32:17.049126 4933 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.049352 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:17 crc kubenswrapper[4933]: E1201 09:32:17.049410 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:25.049400194 +0000 UTC m=+35.691123809 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 09:32:17 crc kubenswrapper[4933]: E1201 09:32:17.049240 4933 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 09:32:17 crc kubenswrapper[4933]: E1201 09:32:17.049342 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 09:32:17 crc kubenswrapper[4933]: E1201 09:32:17.049598 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 09:32:17 crc kubenswrapper[4933]: E1201 09:32:17.049612 4933 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 09:32:17 crc kubenswrapper[4933]: E1201 09:32:17.049650 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:25.049544277 +0000 UTC m=+35.691267892 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 09:32:17 crc kubenswrapper[4933]: E1201 09:32:17.049665 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:25.04965719 +0000 UTC m=+35.691380805 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 09:32:17 crc kubenswrapper[4933]: E1201 09:32:17.049986 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 09:32:17 crc kubenswrapper[4933]: E1201 09:32:17.050136 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 09:32:17 crc kubenswrapper[4933]: E1201 09:32:17.050212 4933 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 09:32:17 crc kubenswrapper[4933]: E1201 09:32:17.050503 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:25.05048462 +0000 UTC m=+35.692208405 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.079436 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" event={"ID":"cae5a541-953b-49b6-8dfa-d19cdd133d79","Type":"ContainerStarted","Data":"521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee"} Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.083133 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerStarted","Data":"36ce28e7f8e8ec9530a417f39c1c68411b10fcf2737b1357e3a2986d5691aff8"} Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.099112 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:17Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.114959 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:17Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.128580 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:17Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.137815 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.137859 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.137870 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.137888 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.137899 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:17Z","lastTransitionTime":"2025-12-01T09:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.143224 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:17Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.160295 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:17Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.180188 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:17Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.194774 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:17Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.207127 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:17Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.222561 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:17Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.235034 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:17Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.240656 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.240703 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.240716 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.240733 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.240745 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:17Z","lastTransitionTime":"2025-12-01T09:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.258192 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCo
unt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c
86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:17Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.271601 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:17Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.282752 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:17Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.295128 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:17Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.343394 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.343438 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.343449 4933 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.343465 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.343478 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:17Z","lastTransitionTime":"2025-12-01T09:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.446184 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.446554 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.446565 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.446581 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.446591 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:17Z","lastTransitionTime":"2025-12-01T09:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.549089 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.549145 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.549169 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.549194 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.549205 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:17Z","lastTransitionTime":"2025-12-01T09:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.652398 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.652432 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.652441 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.652455 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.652463 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:17Z","lastTransitionTime":"2025-12-01T09:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.667615 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.667611 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:17 crc kubenswrapper[4933]: E1201 09:32:17.668046 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:32:17 crc kubenswrapper[4933]: E1201 09:32:17.668082 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.667629 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:17 crc kubenswrapper[4933]: E1201 09:32:17.668283 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.755299 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.755769 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.755969 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.756143 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.756395 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:17Z","lastTransitionTime":"2025-12-01T09:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.858631 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.858668 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.858681 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.858698 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.858708 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:17Z","lastTransitionTime":"2025-12-01T09:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.961831 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.961921 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.961934 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.961956 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:17 crc kubenswrapper[4933]: I1201 09:32:17.961968 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:17Z","lastTransitionTime":"2025-12-01T09:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.064984 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.065061 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.065075 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.065101 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.065118 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:18Z","lastTransitionTime":"2025-12-01T09:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.090251 4933 generic.go:334] "Generic (PLEG): container finished" podID="cae5a541-953b-49b6-8dfa-d19cdd133d79" containerID="521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee" exitCode=0 Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.090348 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" event={"ID":"cae5a541-953b-49b6-8dfa-d19cdd133d79","Type":"ContainerDied","Data":"521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee"} Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.090947 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.090973 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.105631 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.117690 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.119506 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.121613 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.135936 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.147227 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.157551 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.167632 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.167666 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.167677 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.167693 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.167705 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:18Z","lastTransitionTime":"2025-12-01T09:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.170241 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.189476 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log
-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ce28e7f8e8ec9530a417f39c1c68411b10fcf2737b1357e3a2986d5691aff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\
\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.201628 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.214025 4933 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.225026 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.237411 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.245929 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.260344 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
,{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.270069 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.270108 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.270117 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.270131 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.270141 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:18Z","lastTransitionTime":"2025-12-01T09:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.273041 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8z
rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.287041 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.303597 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ce28e7f8e8ec9530a417f39c1c68411b10fcf2
737b1357e3a2986d5691aff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.316700 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.331557 4933 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.344872 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.356123 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.370682 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.372512 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:18 
crc kubenswrapper[4933]: I1201 09:32:18.372549 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.372559 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.372575 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.372587 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:18Z","lastTransitionTime":"2025-12-01T09:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.385467 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.398908 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.414825 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.431964 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.445774 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.458508 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.475057 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.475112 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.475125 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.475146 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.475158 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:18Z","lastTransitionTime":"2025-12-01T09:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.477054 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:18Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.577702 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.577738 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.577746 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.577760 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.577770 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:18Z","lastTransitionTime":"2025-12-01T09:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.680228 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.680270 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.680282 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.680322 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.680345 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:18Z","lastTransitionTime":"2025-12-01T09:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.782868 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.782902 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.782915 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.782930 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.782941 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:18Z","lastTransitionTime":"2025-12-01T09:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.884819 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.884852 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.884863 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.884878 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.884888 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:18Z","lastTransitionTime":"2025-12-01T09:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.987667 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.987719 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.987740 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.987774 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:18 crc kubenswrapper[4933]: I1201 09:32:18.987789 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:18Z","lastTransitionTime":"2025-12-01T09:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.090617 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.090665 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.090679 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.090697 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.090707 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:19Z","lastTransitionTime":"2025-12-01T09:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.096243 4933 generic.go:334] "Generic (PLEG): container finished" podID="cae5a541-953b-49b6-8dfa-d19cdd133d79" containerID="bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88" exitCode=0 Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.096339 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" event={"ID":"cae5a541-953b-49b6-8dfa-d19cdd133d79","Type":"ContainerDied","Data":"bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88"} Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.096401 4933 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.113239 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.140711 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.156217 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.169903 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.186318 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.193336 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.193374 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.193384 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.193400 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.193411 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:19Z","lastTransitionTime":"2025-12-01T09:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.197997 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.218686 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ce28e7f8e8ec9530a417f39c1c68411b10fcf2737b1357e3a2986d5691aff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.230946 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.248248 4933 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.262982 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.276654 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.290384 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.295376 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.295427 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.295439 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.295460 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.295473 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:19Z","lastTransitionTime":"2025-12-01T09:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.305198 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.321185 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafa
dd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.398033 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.398066 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.398077 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.398092 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.398103 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:19Z","lastTransitionTime":"2025-12-01T09:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.500728 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.500776 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.500786 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.500803 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.500812 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:19Z","lastTransitionTime":"2025-12-01T09:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.604124 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.604184 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.604198 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.604228 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.604243 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:19Z","lastTransitionTime":"2025-12-01T09:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.666815 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.666974 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:19 crc kubenswrapper[4933]: E1201 09:32:19.667346 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.667035 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:19 crc kubenswrapper[4933]: E1201 09:32:19.667521 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:32:19 crc kubenswrapper[4933]: E1201 09:32:19.667554 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.686815 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.707660 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.707728 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.707739 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.707755 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.707766 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:19Z","lastTransitionTime":"2025-12-01T09:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.712292 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ce28e7f8e8ec9530a417f39c1c68411b10fcf2737b1357e3a2986d5691aff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.727284 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.740191 4933 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.751041 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.761916 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.773190 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.787997 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
,{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.803479 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.810673 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.810715 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.810726 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.810742 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.810751 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:19Z","lastTransitionTime":"2025-12-01T09:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.818140 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.831890 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.846229 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.859610 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.872792 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.913506 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.913547 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.913561 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.913579 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:19 crc kubenswrapper[4933]: I1201 09:32:19.913591 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:19Z","lastTransitionTime":"2025-12-01T09:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
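
The escaped JSON inside each err field is the body of a strategic merge patch that the kubelet sends to the pod's /status subresource. The $setElementOrder/conditions key is a strategic-merge-patch directive: it pins the ordering of the conditions list while the individual entries merge by their type key. A sketch of the same shape, with the uid and timestamp copied from the first record above and everything else kept minimal:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Minimal shape of the status patch seen in the records above. Field names
    // follow the Pod status API; the values are illustrative.
    type patch struct {
        Metadata struct {
            UID string `json:"uid"`
        } `json:"metadata"`
        Status map[string]interface{} `json:"status"`
    }

    func main() {
        var p patch
        p.Metadata.UID = "ef543e1b-8068-4ea3-b32a-61027b32e95d" // from the first record
        p.Status = map[string]interface{}{
            // Strategic-merge-patch directive: keep this ordering of the
            // conditions list; entries merge by their "type" key.
            "$setElementOrder/conditions": []map[string]string{
                {"type": "PodReadyToStartContainers"},
                {"type": "Initialized"},
                {"type": "Ready"},
                {"type": "ContainersReady"},
                {"type": "PodScheduled"},
            },
            "conditions": []map[string]string{
                {"type": "Ready", "status": "True", "lastTransitionTime": "2025-12-01T09:32:11Z"},
            },
        }
        b, _ := json.MarshalIndent(p, "", "  ")
        fmt.Println(string(b))
    }

Spelling the patch out makes clear that the request body itself is well formed; it is the admission step, not the payload, that fails.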
Has your network provider started?"} Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.017113 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.017174 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.017192 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.017215 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.017229 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:20Z","lastTransitionTime":"2025-12-01T09:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.100792 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zccpd_d49bee31-b7e9-4daa-986f-b6f58c663813/ovnkube-controller/0.log" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.104039 4933 generic.go:334] "Generic (PLEG): container finished" podID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerID="36ce28e7f8e8ec9530a417f39c1c68411b10fcf2737b1357e3a2986d5691aff8" exitCode=1 Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.104114 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerDied","Data":"36ce28e7f8e8ec9530a417f39c1c68411b10fcf2737b1357e3a2986d5691aff8"} Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.104875 4933 scope.go:117] "RemoveContainer" containerID="36ce28e7f8e8ec9530a417f39c1c68411b10fcf2737b1357e3a2986d5691aff8" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.109531 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" event={"ID":"cae5a541-953b-49b6-8dfa-d19cdd133d79","Type":"ContainerStarted","Data":"394798e74d5e23df64b5092d4f6a60763d9c14b7348b0b0ee607066cd3db0b1e"} Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.119657 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.119712 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.119724 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.119741 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.119752 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:20Z","lastTransitionTime":"2025-12-01T09:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.120001 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.135850 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.151940 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
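
With one such record per pod repeating every few seconds, the quickest way to size the blast radius is to pull out just the pod names and the certificate window. A small triage sketch, assuming one journal record per line as journalctl emits them (the capture above wraps long records for display, so rejoin them first); the regular expressions are tied to the exact field layout shown in these records:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    var (
        // Structured fields kubelet logs on each failed status patch.
        podRE  = regexp.MustCompile(`"Failed to update status for pod" pod="([^"]+)"`)
        certRE = regexp.MustCompile(`current time ([0-9TZ:.-]+) is after ([0-9TZ:.-]+)`)
    )

    func main() {
        seen := map[string]bool{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 16*1024*1024) // journal lines can be huge
        for sc.Scan() {
            line := sc.Text()
            m := podRE.FindStringSubmatch(line)
            if m == nil {
                continue
            }
            if !seen[m[1]] {
                seen[m[1]] = true
                fmt.Println("affected pod:", m[1])
            }
            if c := certRE.FindStringSubmatch(line); c != nil {
                fmt.Printf("  clock %s vs cert expiry %s\n", c[1], c[2])
            }
        }
    }

Run as, say, go run triage.go < kubelet.log: it prints each affected pod once (openshift-dns/node-resolver-nzz88, openshift-multus/multus-4fncv, and so on) together with the clock-versus-expiry pair that confirms the certificate diagnosis.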
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.169547 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.186022 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.202698 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e
337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.220165 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.222085 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.222122 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.222136 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.222154 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.222166 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:20Z","lastTransitionTime":"2025-12-01T09:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.243374 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ce28e7f8e8ec9530a417f39c1c68411b10fcf2
737b1357e3a2986d5691aff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36ce28e7f8e8ec9530a417f39c1c68411b10fcf2737b1357e3a2986d5691aff8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"message\\\":\\\"oval\\\\nI1201 09:32:19.786469 6172 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 09:32:19.786490 6172 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 09:32:19.786481 6172 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 09:32:19.786499 6172 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 09:32:19.786517 6172 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1201 09:32:19.786523 6172 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1201 09:32:19.786537 6172 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1201 09:32:19.786545 6172 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 09:32:19.786569 6172 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 09:32:19.786877 6172 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1201 09:32:19.786932 6172 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 09:32:19.786938 6172 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 09:32:19.786955 6172 factory.go:656] Stopping watch factory\\\\nI1201 09:32:19.786971 6172 ovnkube.go:599] Stopped ovnkube\\\\nI1201 09:32:19.787012 6172 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1201 09:32:19.787033 6172 handler.go:208] Removed *v1.Node event handler 
2\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.256690 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.269512 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.282743 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.298299 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.315593 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafa
dd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.324997 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.325046 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.325071 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.325089 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.325103 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:20Z","lastTransitionTime":"2025-12-01T09:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.329774 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.347238 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.361053 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.376429 4933 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.388025 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.402987 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.418537 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://394798e74d5e23df64b5092d4f6a60763d9c14b7348b0b0ee607066cd3db0b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.428825 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.428870 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.428882 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.428903 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.428916 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:20Z","lastTransitionTime":"2025-12-01T09:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.441834 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.456684 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.474899 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.491839 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.504648 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.516961 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.529721 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg"] Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.530422 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.532662 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.532698 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.532709 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.532732 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.532742 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:20Z","lastTransitionTime":"2025-12-01T09:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.532846 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.533026 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.537165 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\"
:\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ce28e7f8e8ec9530a417f39c1c68411b10fcf2737b1357e3a2986d5691aff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36ce28e7f8e8ec9530a417f39c1c68411b10fcf2737b1357e3a2986d5691aff8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"message\\\":\\\"oval\\\\nI1201 09:32:19.786469 6172 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 09:32:19.786490 6172 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 09:32:19.786481 6172 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 09:32:19.786499 6172 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 09:32:19.786517 6172 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1201 09:32:19.786523 6172 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1201 09:32:19.786537 6172 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1201 09:32:19.786545 6172 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 09:32:19.786569 6172 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 09:32:19.786877 6172 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1201 09:32:19.786932 6172 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 09:32:19.786938 6172 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 09:32:19.786955 6172 factory.go:656] Stopping watch factory\\\\nI1201 09:32:19.786971 6172 ovnkube.go:599] Stopped 
ovnkube\\\\nI1201 09:32:19.787012 6172 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1201 09:32:19.787033 6172 handler.go:208] Removed *v1.Node event handler 2\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.550268 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.560324 4933 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.578430 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ce28e7f8e8ec9530a417f39c1c68411b10fcf2
737b1357e3a2986d5691aff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36ce28e7f8e8ec9530a417f39c1c68411b10fcf2737b1357e3a2986d5691aff8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"message\\\":\\\"oval\\\\nI1201 09:32:19.786469 6172 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 09:32:19.786490 6172 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 09:32:19.786481 6172 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 09:32:19.786499 6172 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 09:32:19.786517 6172 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1201 09:32:19.786523 6172 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1201 09:32:19.786537 6172 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1201 09:32:19.786545 6172 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 09:32:19.786569 6172 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 09:32:19.786877 6172 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1201 09:32:19.786932 6172 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 09:32:19.786938 6172 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 09:32:19.786955 6172 factory.go:656] Stopping watch factory\\\\nI1201 09:32:19.786971 6172 ovnkube.go:599] Stopped ovnkube\\\\nI1201 09:32:19.787012 6172 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1201 09:32:19.787033 6172 handler.go:208] Removed *v1.Node event handler 
2\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.590542 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.601361 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.614946 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.628828 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-mult
us-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.635140 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.635173 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.635185 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.635202 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.635214 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:20Z","lastTransitionTime":"2025-12-01T09:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.645927 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://394798e74d5e23df64b5092d4f6a60763d9c14b7348b0b0ee607066cd3db0b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.658628 4933 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6349096c-1520-4206-a85c-e4b3d12e2a57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8g5jg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.673805 4933 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.685610 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq9nj\" (UniqueName: \"kubernetes.io/projected/6349096c-1520-4206-a85c-e4b3d12e2a57-kube-api-access-sq9nj\") pod \"ovnkube-control-plane-749d76644c-8g5jg\" (UID: \"6349096c-1520-4206-a85c-e4b3d12e2a57\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.685667 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6349096c-1520-4206-a85c-e4b3d12e2a57-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8g5jg\" (UID: \"6349096c-1520-4206-a85c-e4b3d12e2a57\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.685651 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.685707 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6349096c-1520-4206-a85c-e4b3d12e2a57-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8g5jg\" (UID: \"6349096c-1520-4206-a85c-e4b3d12e2a57\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.685874 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6349096c-1520-4206-a85c-e4b3d12e2a57-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8g5jg\" (UID: \"6349096c-1520-4206-a85c-e4b3d12e2a57\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.699361 4933 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.716593 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.733693 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.737801 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.737855 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.737869 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.737889 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.737904 4933 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:20Z","lastTransitionTime":"2025-12-01T09:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.747122 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.761386 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:20Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.787025 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6349096c-1520-4206-a85c-e4b3d12e2a57-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8g5jg\" (UID: \"6349096c-1520-4206-a85c-e4b3d12e2a57\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.787156 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6349096c-1520-4206-a85c-e4b3d12e2a57-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8g5jg\" (UID: \"6349096c-1520-4206-a85c-e4b3d12e2a57\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.787204 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq9nj\" (UniqueName: \"kubernetes.io/projected/6349096c-1520-4206-a85c-e4b3d12e2a57-kube-api-access-sq9nj\") pod \"ovnkube-control-plane-749d76644c-8g5jg\" (UID: \"6349096c-1520-4206-a85c-e4b3d12e2a57\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.787227 4933 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6349096c-1520-4206-a85c-e4b3d12e2a57-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8g5jg\" (UID: \"6349096c-1520-4206-a85c-e4b3d12e2a57\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.788141 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6349096c-1520-4206-a85c-e4b3d12e2a57-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8g5jg\" (UID: \"6349096c-1520-4206-a85c-e4b3d12e2a57\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.788176 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6349096c-1520-4206-a85c-e4b3d12e2a57-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8g5jg\" (UID: \"6349096c-1520-4206-a85c-e4b3d12e2a57\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.793880 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6349096c-1520-4206-a85c-e4b3d12e2a57-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8g5jg\" (UID: \"6349096c-1520-4206-a85c-e4b3d12e2a57\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.806154 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq9nj\" (UniqueName: \"kubernetes.io/projected/6349096c-1520-4206-a85c-e4b3d12e2a57-kube-api-access-sq9nj\") pod \"ovnkube-control-plane-749d76644c-8g5jg\" (UID: \"6349096c-1520-4206-a85c-e4b3d12e2a57\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.840863 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.840907 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.840918 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.840959 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.840972 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:20Z","lastTransitionTime":"2025-12-01T09:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.843365 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" Dec 01 09:32:20 crc kubenswrapper[4933]: W1201 09:32:20.857892 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6349096c_1520_4206_a85c_e4b3d12e2a57.slice/crio-7cfac704646c8ac067effc36f92e55636e29d0e08064ea92ffc78ea25ad0795e WatchSource:0}: Error finding container 7cfac704646c8ac067effc36f92e55636e29d0e08064ea92ffc78ea25ad0795e: Status 404 returned error can't find the container with id 7cfac704646c8ac067effc36f92e55636e29d0e08064ea92ffc78ea25ad0795e Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.943536 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.943562 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.943573 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.943589 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:20 crc kubenswrapper[4933]: I1201 09:32:20.943598 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:20Z","lastTransitionTime":"2025-12-01T09:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.045879 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.046408 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.046425 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.046451 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.046472 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:21Z","lastTransitionTime":"2025-12-01T09:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.115435 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" event={"ID":"6349096c-1520-4206-a85c-e4b3d12e2a57","Type":"ContainerStarted","Data":"7cfac704646c8ac067effc36f92e55636e29d0e08064ea92ffc78ea25ad0795e"} Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.118271 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zccpd_d49bee31-b7e9-4daa-986f-b6f58c663813/ovnkube-controller/0.log" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.120821 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerStarted","Data":"6268f9b8702a410e58b0c9d7f1d98f1187ce90b8cda4009de507da7c854479a7"} Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.120896 4933 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.135395 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:21Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.146146 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:21Z is after 2025-08-24T17:21:41Z" Dec 01 
09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.148848 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.148900 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.148910 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.148927 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.148938 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:21Z","lastTransitionTime":"2025-12-01T09:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.159100 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:21Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.170089 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T09:32:21Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.182145 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:21Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.195544 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://394798e74d5e23df64b5092d4f6a60763d9c14b7348b0b0ee607066cd3db0b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:21Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.207377 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6349096c-1520-4206-a85c-e4b3d12e2a57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:20Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8g5jg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:21Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.219849 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:21Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.232200 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:21Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.247299 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:21Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.250669 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.250708 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.250719 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.250734 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.250747 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:21Z","lastTransitionTime":"2025-12-01T09:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.261100 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:21Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.274528 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:21Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.288386 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:21Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.306008 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6268f9b8702a410e58b0c9d7f1d98f1187ce90b8cda4009de507da7c854479a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36ce28e7f8e8ec9530a417f39c1c68411b10fcf2737b1357e3a2986d5691aff8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"message\\\":\\\"oval\\\\nI1201 09:32:19.786469 6172 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 09:32:19.786490 6172 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 09:32:19.786481 6172 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 09:32:19.786499 6172 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 09:32:19.786517 6172 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1201 09:32:19.786523 6172 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1201 09:32:19.786537 6172 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1201 09:32:19.786545 6172 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 09:32:19.786569 6172 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 09:32:19.786877 6172 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1201 09:32:19.786932 6172 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 09:32:19.786938 6172 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 09:32:19.786955 6172 factory.go:656] Stopping watch factory\\\\nI1201 09:32:19.786971 6172 ovnkube.go:599] Stopped ovnkube\\\\nI1201 09:32:19.787012 6172 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1201 09:32:19.787033 6172 handler.go:208] Removed *v1.Node event handler 
2\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:21Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.318280 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:21Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.353746 4933 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.353835 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.353846 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.353862 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.353873 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:21Z","lastTransitionTime":"2025-12-01T09:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.443008 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.443061 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.443073 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.443091 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.443105 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:21Z","lastTransitionTime":"2025-12-01T09:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:21 crc kubenswrapper[4933]: E1201 09:32:21.456868 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:21Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.461632 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.461677 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.461687 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.461704 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.461715 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:21Z","lastTransitionTime":"2025-12-01T09:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:21 crc kubenswrapper[4933]: E1201 09:32:21.476901 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:21Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.483534 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.483567 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.483575 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.483589 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.483599 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:21Z","lastTransitionTime":"2025-12-01T09:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:21 crc kubenswrapper[4933]: E1201 09:32:21.499287 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:21Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.502990 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.503056 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.503076 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.503102 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.503119 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:21Z","lastTransitionTime":"2025-12-01T09:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:21 crc kubenswrapper[4933]: E1201 09:32:21.521803 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:21Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.530910 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.530946 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.530957 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.530973 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.530983 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:21Z","lastTransitionTime":"2025-12-01T09:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:21 crc kubenswrapper[4933]: E1201 09:32:21.544383 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:21Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:21 crc kubenswrapper[4933]: E1201 09:32:21.544545 4933 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.546217 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.546253 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.546263 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.546282 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.546293 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:21Z","lastTransitionTime":"2025-12-01T09:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.649586 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.649635 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.649646 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.649665 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.649678 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:21Z","lastTransitionTime":"2025-12-01T09:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.666895 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.666930 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.666961 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:21 crc kubenswrapper[4933]: E1201 09:32:21.667051 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:32:21 crc kubenswrapper[4933]: E1201 09:32:21.667166 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:32:21 crc kubenswrapper[4933]: E1201 09:32:21.667272 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.752638 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.752673 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.752682 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.752733 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.752745 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:21Z","lastTransitionTime":"2025-12-01T09:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.855170 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.855204 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.855213 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.855229 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.855239 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:21Z","lastTransitionTime":"2025-12-01T09:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.957621 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.957695 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.957711 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.957739 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:21 crc kubenswrapper[4933]: I1201 09:32:21.957754 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:21Z","lastTransitionTime":"2025-12-01T09:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.060491 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.060528 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.060536 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.060549 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.060559 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:22Z","lastTransitionTime":"2025-12-01T09:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.126974 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" event={"ID":"6349096c-1520-4206-a85c-e4b3d12e2a57","Type":"ContainerStarted","Data":"9ce495e866931fe759415255c08d443d7d5a62e5a746855bffdc0ddb67d6d7cc"} Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.127028 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" event={"ID":"6349096c-1520-4206-a85c-e4b3d12e2a57","Type":"ContainerStarted","Data":"be0448561ffbd1804ea3b1d6aa5124a87bdc861f066ec878932aebe7ef8cec0c"} Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.128918 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zccpd_d49bee31-b7e9-4daa-986f-b6f58c663813/ovnkube-controller/1.log" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.129561 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zccpd_d49bee31-b7e9-4daa-986f-b6f58c663813/ovnkube-controller/0.log" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.132574 4933 generic.go:334] "Generic (PLEG): container finished" podID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerID="6268f9b8702a410e58b0c9d7f1d98f1187ce90b8cda4009de507da7c854479a7" exitCode=1 Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.132607 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerDied","Data":"6268f9b8702a410e58b0c9d7f1d98f1187ce90b8cda4009de507da7c854479a7"} Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.132664 4933 scope.go:117] "RemoveContainer" containerID="36ce28e7f8e8ec9530a417f39c1c68411b10fcf2737b1357e3a2986d5691aff8" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.133907 4933 scope.go:117] "RemoveContainer" containerID="6268f9b8702a410e58b0c9d7f1d98f1187ce90b8cda4009de507da7c854479a7" Dec 01 09:32:22 crc kubenswrapper[4933]: E1201 09:32:22.134071 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-zccpd_openshift-ovn-kubernetes(d49bee31-b7e9-4daa-986f-b6f58c663813)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.144663 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.157367 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.162916 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.163450 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.163525 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.163611 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.163701 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:22Z","lastTransitionTime":"2025-12-01T09:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.169838 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.185617 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.200208 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.215517 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.235413 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6268f9b8702a410e58b0c9d7f1d98f1187ce90b8cda4009de507da7c854479a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36ce28e7f8e8ec9530a417f39c1c68411b10fcf2737b1357e3a2986d5691aff8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"message\\\":\\\"oval\\\\nI1201 09:32:19.786469 6172 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 09:32:19.786490 6172 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 09:32:19.786481 6172 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 09:32:19.786499 6172 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 09:32:19.786517 6172 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1201 09:32:19.786523 6172 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1201 09:32:19.786537 6172 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1201 09:32:19.786545 6172 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 09:32:19.786569 6172 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 09:32:19.786877 6172 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1201 09:32:19.786932 6172 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 09:32:19.786938 6172 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 09:32:19.786955 6172 factory.go:656] Stopping watch factory\\\\nI1201 09:32:19.786971 6172 ovnkube.go:599] Stopped ovnkube\\\\nI1201 09:32:19.787012 6172 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1201 09:32:19.787033 6172 handler.go:208] Removed *v1.Node event handler 
2\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.246788 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.257833 4933 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.265773 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.265816 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.265825 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.265840 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.265851 4933 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:22Z","lastTransitionTime":"2025-12-01T09:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.270755 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.283173 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.298042 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://394798e74d5e23df64b5092d4f6a60763d9c14b7348b0b0ee607066cd3db0b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\
\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abd
f5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\
\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.310502 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6349096c-1520-4206-a85c-e4b3d12e2a57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be0448561ffbd1804ea3b1d6aa5124a87bdc861f066ec878932aebe7ef8cec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce495e866931fe759415255c08d443d7d5a62e5a746855bffdc0ddb67d6d7cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8g5jg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.322357 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.333466 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.344977 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.356643 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.368782 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.368822 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.368833 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.368849 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.368862 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:22Z","lastTransitionTime":"2025-12-01T09:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.376044 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9
968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6268f9b8702a410e58b0c9d7f1d98f1187ce90b8cda4009de507da7c854479a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36ce28e7f8e8ec9530a417f39c1c68411b10fcf2737b1357e3a2986d5691aff8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"message\\\":\\\"oval\\\\nI1201 09:32:19.786469 6172 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 09:32:19.786490 6172 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 09:32:19.786481 6172 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 09:32:19.786499 6172 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 09:32:19.786517 6172 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1201 09:32:19.786523 6172 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1201 09:32:19.786537 6172 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1201 09:32:19.786545 6172 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 09:32:19.786569 6172 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 09:32:19.786877 6172 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1201 09:32:19.786932 6172 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 09:32:19.786938 6172 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 09:32:19.786955 6172 factory.go:656] Stopping watch factory\\\\nI1201 09:32:19.786971 6172 ovnkube.go:599] Stopped ovnkube\\\\nI1201 09:32:19.787012 6172 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1201 09:32:19.787033 6172 handler.go:208] Removed *v1.Node event handler 
2\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6268f9b8702a410e58b0c9d7f1d98f1187ce90b8cda4009de507da7c854479a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/downloads]} name:Service_openshift-console/downloads_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1201 09:32:21.993660 6338 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.387336 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.390557 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-bcqz5"] Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.391093 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:22 crc kubenswrapper[4933]: E1201 09:32:22.391226 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.397970 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.409672 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.422470 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.434970 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://394798e74d5e23df64b5092d4f6a60763d9c14b7348b0b0ee607066cd3db0b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.446523 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6349096c-1520-4206-a85c-e4b3d12e2a57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be0448561ffbd1804ea3b1d6aa5124a87bdc861f066ec878932aebe7ef8cec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce495e866931fe759415255c08d443d7d5a62e5a746855bffdc0ddb67d6d7cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8g5jg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.452100 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.458071 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.470500 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.470536 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.470545 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.470560 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.470570 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:22Z","lastTransitionTime":"2025-12-01T09:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.472965 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.484008 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.494781 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.503129 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs\") pod \"network-metrics-daemon-bcqz5\" (UID: \"9e67470a-b3fe-4176-b546-fdf28012fce5\") " pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.503188 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfvvf\" (UniqueName: \"kubernetes.io/projected/9e67470a-b3fe-4176-b546-fdf28012fce5-kube-api-access-qfvvf\") pod \"network-metrics-daemon-bcqz5\" (UID: \"9e67470a-b3fe-4176-b546-fdf28012fce5\") " pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.507432 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.518730 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.528812 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.545475 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6268f9b8702a410e58b0c9d7f1d98f1187ce90b8cda4009de507da7c854479a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36ce28e7f8e8ec9530a417f39c1c68411b10fcf2737b1357e3a2986d5691aff8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"message\\\":\\\"oval\\\\nI1201 09:32:19.786469 6172 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 09:32:19.786490 6172 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 09:32:19.786481 6172 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 09:32:19.786499 6172 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 09:32:19.786517 6172 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1201 09:32:19.786523 6172 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1201 09:32:19.786537 6172 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1201 09:32:19.786545 6172 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 09:32:19.786569 6172 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 09:32:19.786877 6172 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1201 09:32:19.786932 6172 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 09:32:19.786938 6172 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 09:32:19.786955 6172 factory.go:656] Stopping watch factory\\\\nI1201 09:32:19.786971 6172 ovnkube.go:599] Stopped ovnkube\\\\nI1201 09:32:19.787012 6172 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1201 09:32:19.787033 6172 handler.go:208] Removed *v1.Node event handler 
2\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6268f9b8702a410e58b0c9d7f1d98f1187ce90b8cda4009de507da7c854479a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/downloads]} name:Service_openshift-console/downloads_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1201 09:32:21.993660 6338 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.555901 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.565600 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bcqz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e67470a-b3fe-4176-b546-fdf28012fce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bcqz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.573072 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.573116 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.573129 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.573145 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.573156 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:22Z","lastTransitionTime":"2025-12-01T09:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.581741 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.591052 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.602946 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.604250 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfvvf\" (UniqueName: \"kubernetes.io/projected/9e67470a-b3fe-4176-b546-fdf28012fce5-kube-api-access-qfvvf\") pod \"network-metrics-daemon-bcqz5\" (UID: \"9e67470a-b3fe-4176-b546-fdf28012fce5\") " pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.604343 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs\") pod \"network-metrics-daemon-bcqz5\" (UID: \"9e67470a-b3fe-4176-b546-fdf28012fce5\") " pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:22 crc kubenswrapper[4933]: E1201 09:32:22.604440 4933 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 09:32:22 crc kubenswrapper[4933]: E1201 09:32:22.604501 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs podName:9e67470a-b3fe-4176-b546-fdf28012fce5 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:23.104484826 +0000 UTC m=+33.746208441 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs") pod "network-metrics-daemon-bcqz5" (UID: "9e67470a-b3fe-4176-b546-fdf28012fce5") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.614205 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.620168 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfvvf\" (UniqueName: \"kubernetes.io/projected/9e67470a-b3fe-4176-b546-fdf28012fce5-kube-api-access-qfvvf\") pod \"network-metrics-daemon-bcqz5\" (UID: \"9e67470a-b3fe-4176-b546-fdf28012fce5\") " pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.627033 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.644213 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://394798e74d5e23df64b5092d4f6a60763d9c14b7348b0b0ee607066cd3db0b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.654799 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6349096c-1520-4206-a85c-e4b3d12e2a57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be0448561ffbd1804ea3b1d6aa5124a87bdc861f066ec878932aebe7ef8cec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce495e866931fe759415255c08d443d7d5a62e5a746855bffdc0ddb67d6d7cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8g5jg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.666864 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.675853 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.675900 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.675912 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.675930 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.675942 4933 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:22Z","lastTransitionTime":"2025-12-01T09:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.680482 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.694763 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.718234 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.740897 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.778784 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.778838 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.778850 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.778867 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.778879 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:22Z","lastTransitionTime":"2025-12-01T09:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.881929 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.881992 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.882011 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.882038 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.882058 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:22Z","lastTransitionTime":"2025-12-01T09:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.983793 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.983826 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.983834 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.983848 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:22 crc kubenswrapper[4933]: I1201 09:32:22.983857 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:22Z","lastTransitionTime":"2025-12-01T09:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.085860 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.085909 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.085922 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.085939 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.085953 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:23Z","lastTransitionTime":"2025-12-01T09:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.114531 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs\") pod \"network-metrics-daemon-bcqz5\" (UID: \"9e67470a-b3fe-4176-b546-fdf28012fce5\") " pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:23 crc kubenswrapper[4933]: E1201 09:32:23.114691 4933 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 09:32:23 crc kubenswrapper[4933]: E1201 09:32:23.114771 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs podName:9e67470a-b3fe-4176-b546-fdf28012fce5 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:24.114752887 +0000 UTC m=+34.756476502 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs") pod "network-metrics-daemon-bcqz5" (UID: "9e67470a-b3fe-4176-b546-fdf28012fce5") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.135775 4933 scope.go:117] "RemoveContainer" containerID="6268f9b8702a410e58b0c9d7f1d98f1187ce90b8cda4009de507da7c854479a7" Dec 01 09:32:23 crc kubenswrapper[4933]: E1201 09:32:23.135954 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-zccpd_openshift-ovn-kubernetes(d49bee31-b7e9-4daa-986f-b6f58c663813)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.149623 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\
"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:23Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.162748 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:23Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.175351 4933 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:23Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.185847 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:23Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.188778 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.188827 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.188908 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.188944 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.188961 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:23Z","lastTransitionTime":"2025-12-01T09:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.198879 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:23Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.215011 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://394798e74d5e23df64b5092d4f6a60763d9c14b7348b0b0ee607066cd3db0b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:23Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.226434 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6349096c-1520-4206-a85c-e4b3d12e2a57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be0448561ffbd1804ea3b1d6aa5124a87bdc861f066ec878932aebe7ef8cec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce495e866931fe759415255c08d443d7d5a62e5a746855bffdc0ddb67d6d7cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\
\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8g5jg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:23Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.248050 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:23Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.261321 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:23Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.274767 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:23Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.286757 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:23Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.292237 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.292387 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.292409 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.292427 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.292436 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:23Z","lastTransitionTime":"2025-12-01T09:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.301702 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:23Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.312181 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:23Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.335971 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6268f9b8702a410e58b0c9d7f1d98f1187ce90b8
cda4009de507da7c854479a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6268f9b8702a410e58b0c9d7f1d98f1187ce90b8cda4009de507da7c854479a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/downloads]} name:Service_openshift-console/downloads_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1201 09:32:21.993660 6338 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zccpd_openshift-ovn-kubernetes(d49bee31-b7e9-4daa-986f-b6f58c663813)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:23Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.348066 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:23Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.359161 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bcqz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e67470a-b3fe-4176-b546-fdf28012fce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bcqz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:23Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.394988 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.395038 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.395050 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.395069 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.395081 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:23Z","lastTransitionTime":"2025-12-01T09:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.498176 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.498220 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.498232 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.498253 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.498265 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:23Z","lastTransitionTime":"2025-12-01T09:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.601194 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.601530 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.601621 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.601691 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.601760 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:23Z","lastTransitionTime":"2025-12-01T09:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.667355 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.667451 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.667406 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:23 crc kubenswrapper[4933]: E1201 09:32:23.667581 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:32:23 crc kubenswrapper[4933]: E1201 09:32:23.667839 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:32:23 crc kubenswrapper[4933]: E1201 09:32:23.667988 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.668079 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:23 crc kubenswrapper[4933]: E1201 09:32:23.668207 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.704161 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.704201 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.704210 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.704223 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.704232 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:23Z","lastTransitionTime":"2025-12-01T09:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.806933 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.806994 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.807016 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.807045 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.807066 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:23Z","lastTransitionTime":"2025-12-01T09:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.910075 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.910139 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.910158 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.910187 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:23 crc kubenswrapper[4933]: I1201 09:32:23.910213 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:23Z","lastTransitionTime":"2025-12-01T09:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:24 crc kubenswrapper[4933]: I1201 09:32:24.012872 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:24 crc kubenswrapper[4933]: I1201 09:32:24.012921 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:24 crc kubenswrapper[4933]: I1201 09:32:24.012933 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:24 crc kubenswrapper[4933]: I1201 09:32:24.012953 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:24 crc kubenswrapper[4933]: I1201 09:32:24.012965 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:24Z","lastTransitionTime":"2025-12-01T09:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 01 09:32:24 crc kubenswrapper[4933]: I1201 09:32:24.127017 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs\") pod \"network-metrics-daemon-bcqz5\" (UID: \"9e67470a-b3fe-4176-b546-fdf28012fce5\") " pod="openshift-multus/network-metrics-daemon-bcqz5"
Dec 01 09:32:24 crc kubenswrapper[4933]: E1201 09:32:24.127352 4933 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 01 09:32:24 crc kubenswrapper[4933]: E1201 09:32:24.127557 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs podName:9e67470a-b3fe-4176-b546-fdf28012fce5 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:26.127515705 +0000 UTC m=+36.769239500 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs") pod "network-metrics-daemon-bcqz5" (UID: "9e67470a-b3fe-4176-b546-fdf28012fce5") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 01 09:32:24 crc kubenswrapper[4933]: I1201 09:32:24.141408 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zccpd_d49bee31-b7e9-4daa-986f-b6f58c663813/ovnkube-controller/1.log"
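[Editor's note: the retry entries in this capture show the kubelet's per-volume exponential back-off: the failed metrics-certs mount above is requeued after 2 s, the same mount at 09:32:26 after 4 s, and the volumes further below after 16 s, i.e. the delay roughly doubles per failure. A minimal Go sketch of that shape follows; the constants are illustrative, not the exact values in nestedpendingoperations.go.]
```go
package main

import (
	"fmt"
	"time"
)

// volumeBackoff mimics the shape of the kubelet's per-operation
// exponential back-off: double the delay after every failure, up to a cap.
type volumeBackoff struct {
	delay time.Duration
}

func (b *volumeBackoff) next() time.Duration {
	const (
		initial  = 2 * time.Second // first observed durationBeforeRetry in this log
		maxDelay = 2 * time.Minute // illustrative cap; the kubelet's exact cap may differ
	)
	if b.delay == 0 {
		b.delay = initial
	} else if b.delay *= 2; b.delay > maxDelay {
		b.delay = maxDelay
	}
	return b.delay
}

func main() {
	var b volumeBackoff
	for i := 0; i < 6; i++ {
		// Prints 2s, 4s, 8s, 16s, ... matching the progression of
		// durationBeforeRetry values seen for these volumes.
		fmt.Println(b.next())
	}
}
```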
[Node-status block repeated at 09:32:24.218, 09:32:24.322, 09:32:24.426, 09:32:24.530, 09:32:24.632, 09:32:24.734, 09:32:24.836, 09:32:24.939, and 09:32:25.042; elided.]
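[Editor's note: each repeated block ends with setters.go:603 recording the node's Ready condition as False with reason KubeletNotReady, because the runtime reports NetworkReady=false. A sketch of what that condition looks like, using the k8s.io/api types; the construction is illustrative, not the kubelet's source.]
```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// readyCondition sketches the Ready condition recorded in the log above:
// it stays False while the container runtime reports NetworkReady=false.
func readyCondition(networkReady bool, now time.Time) corev1.NodeCondition {
	cond := corev1.NodeCondition{
		Type:              corev1.NodeReady,
		Status:            corev1.ConditionTrue,
		Reason:            "KubeletReady",
		LastHeartbeatTime: metav1.NewTime(now),
	}
	if !networkReady {
		cond.Status = corev1.ConditionFalse
		cond.Reason = "KubeletNotReady"
		cond.Message = "container runtime network not ready: NetworkReady=false"
		cond.LastTransitionTime = metav1.NewTime(now)
	}
	return cond
}

func main() {
	fmt.Printf("%+v\n", readyCondition(false, time.Now()))
}
```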
Dec 01 09:32:25 crc kubenswrapper[4933]: I1201 09:32:25.137925 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 09:32:25 crc kubenswrapper[4933]: E1201 09:32:25.138132 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:32:41.13810253 +0000 UTC m=+51.779826195 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 09:32:25 crc kubenswrapper[4933]: I1201 09:32:25.138246 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 01 09:32:25 crc kubenswrapper[4933]: I1201 09:32:25.138360 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 09:32:25 crc kubenswrapper[4933]: I1201 09:32:25.138438 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 09:32:25 crc kubenswrapper[4933]: I1201 09:32:25.138482 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 01 09:32:25 crc kubenswrapper[4933]: E1201 09:32:25.138549 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 01 09:32:25 crc kubenswrapper[4933]: E1201 09:32:25.138563 4933 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 01 09:32:25 crc kubenswrapper[4933]: E1201 09:32:25.138626 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 01 09:32:25 crc kubenswrapper[4933]: E1201 09:32:25.138647 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 01 09:32:25 crc kubenswrapper[4933]: E1201 09:32:25.138660 4933 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
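[Editor's note: the UnmountVolume.TearDown failure above is a registration-order problem. The kubelet looks CSI drivers up in an in-memory registry that each node plugin populates by re-registering over the plugin-registration socket; right after a restart the hostpath provisioner has not re-registered yet, so the teardown is requeued (16 s back-off) rather than failed permanently. A minimal, self-contained sketch of that lookup, with illustrative names:]
```go
package main

import (
	"fmt"
	"sync"
)

// driverRegistry mimics the kubelet's in-memory map of registered CSI
// drivers. Lookups fail until the driver's node plugin has re-registered
// after a kubelet restart.
type driverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]struct{}
}

func (r *driverRegistry) register(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.drivers == nil {
		r.drivers = map[string]struct{}{}
	}
	r.drivers[name] = struct{}{}
}

func (r *driverRegistry) tearDown(name, volume string) error {
	r.mu.RLock()
	defer r.mu.RUnlock()
	if _, ok := r.drivers[name]; !ok {
		// Same shape as the error in the log; the caller retries later.
		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	fmt.Println("tearing down", volume, "via", name)
	return nil
}

func main() {
	var reg driverRegistry
	fmt.Println(reg.tearDown("kubevirt.io.hostpath-provisioner", "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"))
	reg.register("kubevirt.io.hostpath-provisioner")
	_ = reg.tearDown("kubevirt.io.hostpath-provisioner", "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8")
}
```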
Dec 01 09:32:25 crc kubenswrapper[4933]: E1201 09:32:25.138690 4933 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 01 09:32:25 crc kubenswrapper[4933]: E1201 09:32:25.138577 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 01 09:32:25 crc kubenswrapper[4933]: E1201 09:32:25.138829 4933 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 01 09:32:25 crc kubenswrapper[4933]: E1201 09:32:25.138712 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:41.138700974 +0000 UTC m=+51.780424769 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 01 09:32:25 crc kubenswrapper[4933]: E1201 09:32:25.138951 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:41.138924599 +0000 UTC m=+51.780648314 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 01 09:32:25 crc kubenswrapper[4933]: E1201 09:32:25.138985 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:41.138972151 +0000 UTC m=+51.780695966 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 01 09:32:25 crc kubenswrapper[4933]: E1201 09:32:25.139012 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:41.139001791 +0000 UTC m=+51.780725606 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
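[Editor's note: the kube-api-access-* volumes failing above are generated projected volumes. Each one combines a bound service-account token with the kube-root-ca.crt configmap (and, on OpenShift, openshift-service-ca.crt), so the mount cannot be satisfied until every source object is registered for the pod, and the error lists all missing sources at once. A sketch of that projection using the k8s.io/api types; the item keys and expiry are illustrative:]
```go
package main

import (
	corev1 "k8s.io/api/core/v1"
)

// kubeAPIAccess sketches what a generated kube-api-access-* projected
// volume contains. MountVolume.SetUp aggregates an error per source that
// is not yet available, as seen in the entries above.
func kubeAPIAccess() corev1.Volume {
	expiry := int64(3607) // illustrative; generated tokens are roughly 1h
	return corev1.Volume{
		Name: "kube-api-access-cqllr",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						ExpirationSeconds: &expiry,
						Path:              "token",
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-service-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "service-ca.crt", Path: "service-ca.crt"}},
					}},
				},
			},
		},
	}
}

func main() { _ = kubeAPIAccess() }
```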
[Node-status block repeated at 09:32:25.144, 09:32:25.246, 09:32:25.348, 09:32:25.453, 09:32:25.556, and 09:32:25.658; elided.]
Dec 01 09:32:25 crc kubenswrapper[4933]: I1201 09:32:25.667004 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 01 09:32:25 crc kubenswrapper[4933]: I1201 09:32:25.667062 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5"
Dec 01 09:32:25 crc kubenswrapper[4933]: I1201 09:32:25.667155 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 09:32:25 crc kubenswrapper[4933]: I1201 09:32:25.667123 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 01 09:32:25 crc kubenswrapper[4933]: E1201 09:32:25.667476 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 01 09:32:25 crc kubenswrapper[4933]: E1201 09:32:25.667772 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 01 09:32:25 crc kubenswrapper[4933]: E1201 09:32:25.668065 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5"
Dec 01 09:32:25 crc kubenswrapper[4933]: E1201 09:32:25.668187 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[Node-status block repeated at 09:32:25.761, 09:32:25.865, 09:32:25.968, and 09:32:26.072; elided.]
Dec 01 09:32:26 crc kubenswrapper[4933]: I1201 09:32:26.152595 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs\") pod \"network-metrics-daemon-bcqz5\" (UID: \"9e67470a-b3fe-4176-b546-fdf28012fce5\") " pod="openshift-multus/network-metrics-daemon-bcqz5"
Dec 01 09:32:26 crc kubenswrapper[4933]: E1201 09:32:26.152715 4933 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 01 09:32:26 crc kubenswrapper[4933]: E1201 09:32:26.152796 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs podName:9e67470a-b3fe-4176-b546-fdf28012fce5 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:30.152762204 +0000 UTC m=+40.794485819 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs") pod "network-metrics-daemon-bcqz5" (UID: "9e67470a-b3fe-4176-b546-fdf28012fce5") : object "openshift-multus"/"metrics-daemon-secret" not registered
[Node-status block repeated at 09:32:26.175, 09:32:26.278, 09:32:26.382, 09:32:26.485, 09:32:26.588, 09:32:26.691, 09:32:26.793, 09:32:26.896, 09:32:27.000, 09:32:27.102, 09:32:27.205, 09:32:27.307, 09:32:27.410, 09:32:27.513, and 09:32:27.617; elided.]
Dec 01 09:32:27 crc kubenswrapper[4933]: I1201 09:32:27.666587 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 01 09:32:27 crc kubenswrapper[4933]: I1201 09:32:27.666672 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 09:32:27 crc kubenswrapper[4933]: I1201 09:32:27.666668 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 01 09:32:27 crc kubenswrapper[4933]: I1201 09:32:27.666769 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5"
Dec 01 09:32:27 crc kubenswrapper[4933]: E1201 09:32:27.666781 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 01 09:32:27 crc kubenswrapper[4933]: E1201 09:32:27.666882 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5"
Dec 01 09:32:27 crc kubenswrapper[4933]: E1201 09:32:27.667000 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 01 09:32:27 crc kubenswrapper[4933]: E1201 09:32:27.667172 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Has your network provider started?"} Dec 01 09:32:27 crc kubenswrapper[4933]: I1201 09:32:27.927644 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:27 crc kubenswrapper[4933]: I1201 09:32:27.927689 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:27 crc kubenswrapper[4933]: I1201 09:32:27.927703 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:27 crc kubenswrapper[4933]: I1201 09:32:27.927722 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:27 crc kubenswrapper[4933]: I1201 09:32:27.927736 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:27Z","lastTransitionTime":"2025-12-01T09:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.029846 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.029889 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.029901 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.029925 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.029936 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:28Z","lastTransitionTime":"2025-12-01T09:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.047228 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.048677 4933 scope.go:117] "RemoveContainer" containerID="6268f9b8702a410e58b0c9d7f1d98f1187ce90b8cda4009de507da7c854479a7" Dec 01 09:32:28 crc kubenswrapper[4933]: E1201 09:32:28.049100 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-zccpd_openshift-ovn-kubernetes(d49bee31-b7e9-4daa-986f-b6f58c663813)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.132589 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.132622 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.132630 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.132643 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.132651 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:28Z","lastTransitionTime":"2025-12-01T09:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.235987 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.236041 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.236056 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.236073 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.236084 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:28Z","lastTransitionTime":"2025-12-01T09:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.338192 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.338258 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.338279 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.338303 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.338355 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:28Z","lastTransitionTime":"2025-12-01T09:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.441238 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.441281 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.441291 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.441321 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.441334 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:28Z","lastTransitionTime":"2025-12-01T09:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.543818 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.543871 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.543880 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.543899 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.543910 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:28Z","lastTransitionTime":"2025-12-01T09:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.646093 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.646171 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.646184 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.646203 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.646216 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:28Z","lastTransitionTime":"2025-12-01T09:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.748781 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.748814 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.748822 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.748835 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.748845 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:28Z","lastTransitionTime":"2025-12-01T09:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.850920 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.850957 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.850967 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.850981 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.850991 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:28Z","lastTransitionTime":"2025-12-01T09:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.953592 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.953869 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.953883 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.953898 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:28 crc kubenswrapper[4933]: I1201 09:32:28.953909 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:28Z","lastTransitionTime":"2025-12-01T09:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.055800 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.055861 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.055876 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.055900 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.055914 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:29Z","lastTransitionTime":"2025-12-01T09:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.158446 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.158491 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.158515 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.158537 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.158581 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:29Z","lastTransitionTime":"2025-12-01T09:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.261473 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.261540 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.261556 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.261582 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.261600 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:29Z","lastTransitionTime":"2025-12-01T09:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.363888 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.363960 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.363984 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.364012 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.364035 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:29Z","lastTransitionTime":"2025-12-01T09:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.466658 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.466702 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.466712 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.466728 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.466740 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:29Z","lastTransitionTime":"2025-12-01T09:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.569879 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.569922 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.569937 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.569951 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.569960 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:29Z","lastTransitionTime":"2025-12-01T09:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.666908 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.667016 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.667066 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.667212 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:29 crc kubenswrapper[4933]: E1201 09:32:29.667652 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:32:29 crc kubenswrapper[4933]: E1201 09:32:29.667875 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:32:29 crc kubenswrapper[4933]: E1201 09:32:29.667969 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:32:29 crc kubenswrapper[4933]: E1201 09:32:29.667792 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.672338 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.672375 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.672387 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.672401 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.672412 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:29Z","lastTransitionTime":"2025-12-01T09:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.681150 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.692985 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.706534 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.722747 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.737237 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://394798e74d5e23df64b5092d4f6a60763d9c14b7348b0b0ee607066cd3db0b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T09:32:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.749229 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6349096c-1520-4206-a85c-e4b3d12e2a57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be0448561ffbd1804ea3b1d6aa5124a87bdc861f066ec878932aebe7ef8cec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce495e866931fe759415255c08d443d7d5a62e5a746855bffdc0ddb67d6d7cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8g5jg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.761745 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.774215 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.775233 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.775267 4933 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.775276 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.775291 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.775321 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:29Z","lastTransitionTime":"2025-12-01T09:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.786442 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.796399 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.808043 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.818448 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.827562 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e
337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.837776 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bcqz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e67470a-b3fe-4176-b546-fdf28012fce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bcqz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.853158 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.877853 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.877894 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.877904 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.877917 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.877926 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:29Z","lastTransitionTime":"2025-12-01T09:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.880276 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6268f9b8702a410e58b0c9d7f1d98f1187ce90b8cda4009de507da7c854479a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6268f9b8702a410e58b0c9d7f1d98f1187ce90b8cda4009de507da7c854479a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/downloads]} name:Service_openshift-console/downloads_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1201 09:32:21.993660 6338 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=ovnkube-controller pod=ovnkube-node-zccpd_openshift-ovn-kubernetes(d49bee31-b7e9-4daa-986f-b6f58c663813)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.980878 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.980926 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.980941 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.980962 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:29 crc kubenswrapper[4933]: I1201 09:32:29.980979 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:29Z","lastTransitionTime":"2025-12-01T09:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.084107 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.084180 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.084214 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.084246 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.084267 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:30Z","lastTransitionTime":"2025-12-01T09:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.187159 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.187210 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.187232 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.187254 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.187269 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:30Z","lastTransitionTime":"2025-12-01T09:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.197781 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs\") pod \"network-metrics-daemon-bcqz5\" (UID: \"9e67470a-b3fe-4176-b546-fdf28012fce5\") " pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:30 crc kubenswrapper[4933]: E1201 09:32:30.197915 4933 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 09:32:30 crc kubenswrapper[4933]: E1201 09:32:30.197969 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs podName:9e67470a-b3fe-4176-b546-fdf28012fce5 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:38.197955925 +0000 UTC m=+48.839679540 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs") pod "network-metrics-daemon-bcqz5" (UID: "9e67470a-b3fe-4176-b546-fdf28012fce5") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.290156 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.290206 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.290219 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.290241 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.290254 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:30Z","lastTransitionTime":"2025-12-01T09:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.396979 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.397034 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.397044 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.397059 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.397070 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:30Z","lastTransitionTime":"2025-12-01T09:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.499339 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.499385 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.499424 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.499444 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.499457 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:30Z","lastTransitionTime":"2025-12-01T09:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.602594 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.602643 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.602656 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.602673 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.602684 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:30Z","lastTransitionTime":"2025-12-01T09:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.705243 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.705290 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.705323 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.705343 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.705354 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:30Z","lastTransitionTime":"2025-12-01T09:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.807447 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.807498 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.807508 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.807521 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.807531 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:30Z","lastTransitionTime":"2025-12-01T09:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.910293 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.910422 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.910434 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.910456 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:30 crc kubenswrapper[4933]: I1201 09:32:30.910469 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:30Z","lastTransitionTime":"2025-12-01T09:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.013662 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.013716 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.013731 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.013750 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.013764 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:31Z","lastTransitionTime":"2025-12-01T09:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.116389 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.116457 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.116470 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.116492 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.116516 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:31Z","lastTransitionTime":"2025-12-01T09:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.219139 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.219210 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.219230 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.219257 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.219276 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:31Z","lastTransitionTime":"2025-12-01T09:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.321974 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.322042 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.322057 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.322084 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.322101 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:31Z","lastTransitionTime":"2025-12-01T09:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.425695 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.425778 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.425802 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.425834 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.425860 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:31Z","lastTransitionTime":"2025-12-01T09:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.528775 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.528825 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.528838 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.528865 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.528880 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:31Z","lastTransitionTime":"2025-12-01T09:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.632559 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.632621 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.632633 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.632655 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.632668 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:31Z","lastTransitionTime":"2025-12-01T09:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.667113 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.667157 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.667362 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:31 crc kubenswrapper[4933]: E1201 09:32:31.667563 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:32:31 crc kubenswrapper[4933]: E1201 09:32:31.667698 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.667755 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:31 crc kubenswrapper[4933]: E1201 09:32:31.667840 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:32:31 crc kubenswrapper[4933]: E1201 09:32:31.668078 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.735942 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.735988 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.735999 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.736014 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.736026 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:31Z","lastTransitionTime":"2025-12-01T09:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.838905 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.838974 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.839003 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.839027 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.839040 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:31Z","lastTransitionTime":"2025-12-01T09:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.940759 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.940845 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.940872 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.940921 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.940946 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:31Z","lastTransitionTime":"2025-12-01T09:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:31 crc kubenswrapper[4933]: E1201 09:32:31.957544 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:31Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.964221 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.964386 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.964426 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.964464 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.964486 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:31Z","lastTransitionTime":"2025-12-01T09:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:31 crc kubenswrapper[4933]: E1201 09:32:31.985059 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:31Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.990656 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.990736 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.990750 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.990769 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:31 crc kubenswrapper[4933]: I1201 09:32:31.990780 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:31Z","lastTransitionTime":"2025-12-01T09:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:32 crc kubenswrapper[4933]: E1201 09:32:32.012805 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:32Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.017983 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.018039 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.018050 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.018063 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.018073 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:32Z","lastTransitionTime":"2025-12-01T09:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:32 crc kubenswrapper[4933]: E1201 09:32:32.039881 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:32Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.045663 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.045720 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.045734 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.045757 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.045771 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:32Z","lastTransitionTime":"2025-12-01T09:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:32 crc kubenswrapper[4933]: E1201 09:32:32.059129 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:32Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:32 crc kubenswrapper[4933]: E1201 09:32:32.059270 4933 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.061390 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.061439 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.061453 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.061473 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.061485 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:32Z","lastTransitionTime":"2025-12-01T09:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.163857 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.163908 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.163918 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.163932 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.163943 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:32Z","lastTransitionTime":"2025-12-01T09:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.266871 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.266922 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.266939 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.266966 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.266983 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:32Z","lastTransitionTime":"2025-12-01T09:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.370607 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.370665 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.370680 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.370699 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.370712 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:32Z","lastTransitionTime":"2025-12-01T09:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.474201 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.474246 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.474256 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.474270 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.474281 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:32Z","lastTransitionTime":"2025-12-01T09:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.576625 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.576664 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.576672 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.576688 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.576697 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:32Z","lastTransitionTime":"2025-12-01T09:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.679001 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.679057 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.679069 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.679086 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.679099 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:32Z","lastTransitionTime":"2025-12-01T09:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.781066 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.781123 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.781135 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.781152 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.781165 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:32Z","lastTransitionTime":"2025-12-01T09:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.883580 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.883633 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.883644 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.883662 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.883677 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:32Z","lastTransitionTime":"2025-12-01T09:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.986516 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.986565 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.986574 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.986590 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:32 crc kubenswrapper[4933]: I1201 09:32:32.986603 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:32Z","lastTransitionTime":"2025-12-01T09:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.089348 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.089405 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.089423 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.089440 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.089451 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:33Z","lastTransitionTime":"2025-12-01T09:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.192000 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.192077 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.192097 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.192121 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.192137 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:33Z","lastTransitionTime":"2025-12-01T09:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.294789 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.294832 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.294844 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.294860 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.294871 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:33Z","lastTransitionTime":"2025-12-01T09:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.398175 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.398256 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.398265 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.398280 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.398290 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:33Z","lastTransitionTime":"2025-12-01T09:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.500386 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.500433 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.500443 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.500458 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.500471 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:33Z","lastTransitionTime":"2025-12-01T09:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.603231 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.603286 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.603333 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.603357 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.603373 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:33Z","lastTransitionTime":"2025-12-01T09:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.667157 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.667199 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.667199 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.667240 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:33 crc kubenswrapper[4933]: E1201 09:32:33.667367 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:32:33 crc kubenswrapper[4933]: E1201 09:32:33.667636 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:32:33 crc kubenswrapper[4933]: E1201 09:32:33.667730 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:32:33 crc kubenswrapper[4933]: E1201 09:32:33.667842 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.706252 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.706339 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.706352 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.706368 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.706378 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:33Z","lastTransitionTime":"2025-12-01T09:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.809407 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.809492 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.809516 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.809546 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.809571 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:33Z","lastTransitionTime":"2025-12-01T09:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.912518 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.912598 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.912620 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.912650 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:33 crc kubenswrapper[4933]: I1201 09:32:33.912677 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:33Z","lastTransitionTime":"2025-12-01T09:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.014954 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.014990 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.015003 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.015019 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.015031 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:34Z","lastTransitionTime":"2025-12-01T09:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.117786 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.117858 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.117878 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.117902 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.117921 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:34Z","lastTransitionTime":"2025-12-01T09:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.220117 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.220185 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.220206 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.220229 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.220246 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:34Z","lastTransitionTime":"2025-12-01T09:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.322688 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.322733 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.322743 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.322756 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.322766 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:34Z","lastTransitionTime":"2025-12-01T09:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.425062 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.425123 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.425140 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.425162 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.425176 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:34Z","lastTransitionTime":"2025-12-01T09:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.528348 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.528383 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.528395 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.528413 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.528424 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:34Z","lastTransitionTime":"2025-12-01T09:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.631363 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.631450 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.631475 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.631505 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.631528 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:34Z","lastTransitionTime":"2025-12-01T09:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.733917 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.733952 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.733962 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.733977 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.733989 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:34Z","lastTransitionTime":"2025-12-01T09:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.837394 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.837445 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.837454 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.837470 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.837479 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:34Z","lastTransitionTime":"2025-12-01T09:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.940178 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.940241 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.940252 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.940266 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:34 crc kubenswrapper[4933]: I1201 09:32:34.940275 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:34Z","lastTransitionTime":"2025-12-01T09:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.043858 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.043900 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.043909 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.043925 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.043935 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:35Z","lastTransitionTime":"2025-12-01T09:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.147696 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.147758 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.147780 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.147804 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.147823 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:35Z","lastTransitionTime":"2025-12-01T09:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.250485 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.250525 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.250535 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.250547 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.250557 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:35Z","lastTransitionTime":"2025-12-01T09:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.352234 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.352279 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.352293 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.352353 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.352369 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:35Z","lastTransitionTime":"2025-12-01T09:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.455093 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.455162 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.455172 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.455185 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.455194 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:35Z","lastTransitionTime":"2025-12-01T09:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.557517 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.557556 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.557564 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.557578 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.557587 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:35Z","lastTransitionTime":"2025-12-01T09:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.660020 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.660091 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.660104 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.660119 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.660129 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:35Z","lastTransitionTime":"2025-12-01T09:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.667545 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.667550 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.667709 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:35 crc kubenswrapper[4933]: E1201 09:32:35.667815 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.667928 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:35 crc kubenswrapper[4933]: E1201 09:32:35.667921 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:32:35 crc kubenswrapper[4933]: E1201 09:32:35.668041 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:32:35 crc kubenswrapper[4933]: E1201 09:32:35.668224 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.762481 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.762525 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.762535 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.762551 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.762564 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:35Z","lastTransitionTime":"2025-12-01T09:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.865341 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.865390 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.865400 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.865416 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.865426 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:35Z","lastTransitionTime":"2025-12-01T09:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.968625 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.968703 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.968717 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.968746 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:35 crc kubenswrapper[4933]: I1201 09:32:35.968762 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:35Z","lastTransitionTime":"2025-12-01T09:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.071280 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.071355 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.071368 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.071381 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.071391 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:36Z","lastTransitionTime":"2025-12-01T09:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.173656 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.173690 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.173697 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.173710 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.173750 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:36Z","lastTransitionTime":"2025-12-01T09:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.275536 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.275592 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.275602 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.275625 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.275637 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:36Z","lastTransitionTime":"2025-12-01T09:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.377940 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.377986 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.377998 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.378015 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.378026 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:36Z","lastTransitionTime":"2025-12-01T09:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.479910 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.479958 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.479966 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.479981 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.479990 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:36Z","lastTransitionTime":"2025-12-01T09:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.581958 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.581999 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.582031 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.582044 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.582053 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:36Z","lastTransitionTime":"2025-12-01T09:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.684032 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.684072 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.684081 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.684096 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.684105 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:36Z","lastTransitionTime":"2025-12-01T09:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.787360 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.787403 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.787413 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.787428 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.787438 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:36Z","lastTransitionTime":"2025-12-01T09:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.890088 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.890142 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.890157 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.890176 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.890188 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:36Z","lastTransitionTime":"2025-12-01T09:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.992814 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.992883 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.992901 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.992928 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:36 crc kubenswrapper[4933]: I1201 09:32:36.992946 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:36Z","lastTransitionTime":"2025-12-01T09:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.096402 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.096452 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.096465 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.096482 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.096495 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:37Z","lastTransitionTime":"2025-12-01T09:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.198841 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.198904 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.198915 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.198939 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.198955 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:37Z","lastTransitionTime":"2025-12-01T09:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.301201 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.301265 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.301275 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.301295 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.301323 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:37Z","lastTransitionTime":"2025-12-01T09:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.404340 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.404386 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.404398 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.404419 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.404433 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:37Z","lastTransitionTime":"2025-12-01T09:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.507530 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.507590 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.507603 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.507625 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.507640 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:37Z","lastTransitionTime":"2025-12-01T09:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.603903 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.609397 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.609443 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.609454 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.609471 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.609485 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:37Z","lastTransitionTime":"2025-12-01T09:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.612690 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.618953 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:37Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.631081 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:37Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.644707 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:37Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.658991 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:37Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.666832 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.666882 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.666933 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.666849 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:37 crc kubenswrapper[4933]: E1201 09:32:37.666998 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:32:37 crc kubenswrapper[4933]: E1201 09:32:37.667111 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:32:37 crc kubenswrapper[4933]: E1201 09:32:37.667148 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:32:37 crc kubenswrapper[4933]: E1201 09:32:37.667231 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.672214 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:37Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.684050 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:37Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.697937 4933 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-bcqz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e67470a-b3fe-4176-b546-fdf28012fce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bcqz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:37Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.711600 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.711655 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 
09:32:37.711669 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.711690 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.711702 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:37Z","lastTransitionTime":"2025-12-01T09:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.712263 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:37Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.731344 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6268f9b8702a410e58b0c9d7f1d98f1187ce90b8cda4009de507da7c854479a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6268f9b8702a410e58b0c9d7f1d98f1187ce90b8cda4009de507da7c854479a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/downloads]} name:Service_openshift-console/downloads_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1201 09:32:21.993660 6338 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zccpd_openshift-ovn-kubernetes(d49bee31-b7e9-4daa-986f-b6f58c663813)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:37Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.745232 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:37Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.757819 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01
T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:37Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.769129 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:37Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.782495 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:37Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.797907 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://394798e74d5e23df64b5092d4f6a60763d9c14b7348b0b0ee607066cd3db0b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T09:32:37Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.808880 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6349096c-1520-4206-a85c-e4b3d12e2a57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be0448561ffbd1804ea3b1d6aa5124a87bdc861f066ec878932aebe7ef8cec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce495e866931fe759415255c08d443d7d5a62e5a746855bffdc0ddb67d6d7cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8g5jg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:37Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.813731 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.813772 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.813783 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.813799 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.813812 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:37Z","lastTransitionTime":"2025-12-01T09:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.822263 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:37Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.917161 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.917210 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.917220 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.917253 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:37 crc kubenswrapper[4933]: I1201 09:32:37.917265 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:37Z","lastTransitionTime":"2025-12-01T09:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.019370 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.019413 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.019431 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.019448 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.019458 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:38Z","lastTransitionTime":"2025-12-01T09:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.121641 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.121687 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.121699 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.121720 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.121734 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:38Z","lastTransitionTime":"2025-12-01T09:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.224072 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.224169 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.224181 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.224228 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.224241 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:38Z","lastTransitionTime":"2025-12-01T09:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.278504 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs\") pod \"network-metrics-daemon-bcqz5\" (UID: \"9e67470a-b3fe-4176-b546-fdf28012fce5\") " pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:38 crc kubenswrapper[4933]: E1201 09:32:38.278758 4933 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 09:32:38 crc kubenswrapper[4933]: E1201 09:32:38.278884 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs podName:9e67470a-b3fe-4176-b546-fdf28012fce5 nodeName:}" failed. No retries permitted until 2025-12-01 09:32:54.278857792 +0000 UTC m=+64.920581407 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs") pod "network-metrics-daemon-bcqz5" (UID: "9e67470a-b3fe-4176-b546-fdf28012fce5") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.327620 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.327657 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.327665 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.327682 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.327693 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:38Z","lastTransitionTime":"2025-12-01T09:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.430102 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.430163 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.430173 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.430191 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.430202 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:38Z","lastTransitionTime":"2025-12-01T09:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.532899 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.532948 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.532959 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.532975 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.532987 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:38Z","lastTransitionTime":"2025-12-01T09:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.635546 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.635606 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.635618 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.635632 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.635641 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:38Z","lastTransitionTime":"2025-12-01T09:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.738413 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.738466 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.738481 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.738504 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.738520 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:38Z","lastTransitionTime":"2025-12-01T09:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.841118 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.841177 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.841187 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.841210 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.841226 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:38Z","lastTransitionTime":"2025-12-01T09:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.943814 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.943848 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.943857 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.943871 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:38 crc kubenswrapper[4933]: I1201 09:32:38.943882 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:38Z","lastTransitionTime":"2025-12-01T09:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.047215 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.047279 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.047292 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.047334 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.047351 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:39Z","lastTransitionTime":"2025-12-01T09:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.150452 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.150524 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.150546 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.150576 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.150592 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:39Z","lastTransitionTime":"2025-12-01T09:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.253792 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.253854 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.253891 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.253906 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.253919 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:39Z","lastTransitionTime":"2025-12-01T09:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.355984 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.356039 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.356052 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.356070 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.356082 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:39Z","lastTransitionTime":"2025-12-01T09:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.458948 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.458995 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.459011 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.459034 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.459046 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:39Z","lastTransitionTime":"2025-12-01T09:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.561645 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.561708 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.561719 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.561733 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.561744 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:39Z","lastTransitionTime":"2025-12-01T09:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.664345 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.664406 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.664417 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.664431 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.664444 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:39Z","lastTransitionTime":"2025-12-01T09:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.666949 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.666985 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.666985 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.666949 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:39 crc kubenswrapper[4933]: E1201 09:32:39.667116 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:32:39 crc kubenswrapper[4933]: E1201 09:32:39.667224 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:32:39 crc kubenswrapper[4933]: E1201 09:32:39.667344 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:32:39 crc kubenswrapper[4933]: E1201 09:32:39.667478 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.680901 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:39Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.701003 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6268f9b8702a410e58b0c9d7f1d98f1187ce90b8cda4009de507da7c854479a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6268f9b8702a410e58b0c9d7f1d98f1187ce90b8cda4009de507da7c854479a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/downloads]} name:Service_openshift-console/downloads_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1201 09:32:21.993660 6338 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zccpd_openshift-ovn-kubernetes(d49bee31-b7e9-4daa-986f-b6f58c663813)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:39Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.715874 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:39Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.727001 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bcqz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e67470a-b3fe-4176-b546-fdf28012fce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bcqz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:39Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.738035 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25d453a5-e8e5-4563-a6af-2a0190fbe7eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cb9e78d01fb4f20fa14d20f2dd4b044fcedbebda97e0437e562e4c8b5e9072a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9beebf42bdb1ff95c1b5a3faeb820455a7c0fcb764f0b1f3fd892575a95334b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001dd3d1bb28861105ed423a5460657b031a040e934d0c789a766ca3f9499ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:39Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.750898 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:39Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.761442 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:39Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.766435 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.766489 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.766502 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.766520 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.766869 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:39Z","lastTransitionTime":"2025-12-01T09:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.775558 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:39Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.787148 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T09:32:39Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.802380 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:39Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.816096 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://394798e74d5e23df64b5092d4f6a60763d9c14b7348b0b0ee607066cd3db0b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:39Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.827138 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6349096c-1520-4206-a85c-e4b3d12e2a57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be0448561ffbd1804ea3b1d6aa5124a87bdc861f066ec878932aebe7ef8cec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce495e866931fe759415255c08d443d7d5a62e5a746855bffdc0ddb67d6d7cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podI
P\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8g5jg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:39Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.842849 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:39Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.857072 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:39Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.869405 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.869433 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.869442 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.869455 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.869464 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:39Z","lastTransitionTime":"2025-12-01T09:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.874633 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:39Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.891646 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:39Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.907361 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:39Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.971407 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.971446 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.971458 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.971474 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:39 crc kubenswrapper[4933]: I1201 09:32:39.971486 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:39Z","lastTransitionTime":"2025-12-01T09:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.073709 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.073745 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.073757 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.073772 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.073785 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:40Z","lastTransitionTime":"2025-12-01T09:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.175809 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.175845 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.175853 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.175867 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.175875 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:40Z","lastTransitionTime":"2025-12-01T09:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.279681 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.280067 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.280462 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.280549 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.280568 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:40Z","lastTransitionTime":"2025-12-01T09:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.383513 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.383830 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.383912 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.383989 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.384080 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:40Z","lastTransitionTime":"2025-12-01T09:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.485968 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.486011 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.486023 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.486039 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.486050 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:40Z","lastTransitionTime":"2025-12-01T09:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.588550 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.588585 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.588595 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.588609 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.588618 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:40Z","lastTransitionTime":"2025-12-01T09:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.692028 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.692096 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.692152 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.692174 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.692187 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:40Z","lastTransitionTime":"2025-12-01T09:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.794572 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.794645 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.794656 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.794674 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.794687 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:40Z","lastTransitionTime":"2025-12-01T09:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.897460 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.897515 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.897530 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.897549 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:40 crc kubenswrapper[4933]: I1201 09:32:40.897560 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:40Z","lastTransitionTime":"2025-12-01T09:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.000572 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.000617 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.000632 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.000647 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.000657 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:41Z","lastTransitionTime":"2025-12-01T09:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.103565 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.103604 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.103615 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.103630 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.103641 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:41Z","lastTransitionTime":"2025-12-01T09:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.206040 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.206118 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.206135 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.206154 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.206167 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:41Z","lastTransitionTime":"2025-12-01T09:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.210366 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.210424 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.210446 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:41 crc kubenswrapper[4933]: E1201 09:32:41.210457 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:33:13.210442592 +0000 UTC m=+83.852166207 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.210484 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.210520 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:41 crc kubenswrapper[4933]: E1201 09:32:41.210530 4933 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 09:32:41 crc kubenswrapper[4933]: E1201 09:32:41.210563 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-01 09:33:13.210555025 +0000 UTC m=+83.852278640 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 09:32:41 crc kubenswrapper[4933]: E1201 09:32:41.210578 4933 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 09:32:41 crc kubenswrapper[4933]: E1201 09:32:41.210621 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 09:32:41 crc kubenswrapper[4933]: E1201 09:32:41.210634 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 09:32:41 crc kubenswrapper[4933]: E1201 09:32:41.210644 4933 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 09:32:41 crc kubenswrapper[4933]: E1201 09:32:41.210651 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 09:33:13.210632937 +0000 UTC m=+83.852356552 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 09:32:41 crc kubenswrapper[4933]: E1201 09:32:41.210670 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 09:33:13.210663158 +0000 UTC m=+83.852386773 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 09:32:41 crc kubenswrapper[4933]: E1201 09:32:41.210691 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 09:32:41 crc kubenswrapper[4933]: E1201 09:32:41.210703 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 09:32:41 crc kubenswrapper[4933]: E1201 09:32:41.210710 4933 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 09:32:41 crc kubenswrapper[4933]: E1201 09:32:41.210731 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 09:33:13.210724569 +0000 UTC m=+83.852448174 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.308279 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.308331 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.308345 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.308359 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.308367 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:41Z","lastTransitionTime":"2025-12-01T09:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.410297 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.410363 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.410373 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.410386 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.410395 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:41Z","lastTransitionTime":"2025-12-01T09:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.512661 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.512714 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.512722 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.512736 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.512745 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:41Z","lastTransitionTime":"2025-12-01T09:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.615000 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.615072 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.615084 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.615101 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.615111 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:41Z","lastTransitionTime":"2025-12-01T09:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.666951 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.667047 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.667130 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.667114 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:41 crc kubenswrapper[4933]: E1201 09:32:41.667199 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:32:41 crc kubenswrapper[4933]: E1201 09:32:41.667278 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:32:41 crc kubenswrapper[4933]: E1201 09:32:41.667380 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:32:41 crc kubenswrapper[4933]: E1201 09:32:41.667482 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.716909 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.716949 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.716960 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.716976 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.716988 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:41Z","lastTransitionTime":"2025-12-01T09:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.819440 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.819487 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.819496 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.819510 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.819519 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:41Z","lastTransitionTime":"2025-12-01T09:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.921772 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.921823 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.921834 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.921854 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:41 crc kubenswrapper[4933]: I1201 09:32:41.921868 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:41Z","lastTransitionTime":"2025-12-01T09:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.024644 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.024726 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.024738 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.024758 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.024770 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:42Z","lastTransitionTime":"2025-12-01T09:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.126667 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.126743 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.126762 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.126787 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.126806 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:42Z","lastTransitionTime":"2025-12-01T09:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.199603 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.199653 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.199665 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.199684 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.199697 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:42Z","lastTransitionTime":"2025-12-01T09:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:42 crc kubenswrapper[4933]: E1201 09:32:42.214409 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:42Z is after 
2025-08-24T17:21:41Z" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.218534 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.218580 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.218591 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.218607 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.218618 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:42Z","lastTransitionTime":"2025-12-01T09:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:42 crc kubenswrapper[4933]: E1201 09:32:42.231876 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:42Z is after 
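Every "NetworkReady=false" record above traces back to one check: the container runtime scans /etc/kubernetes/cni/net.d/ for a CNI network definition and finds none, so the kubelet keeps the Ready condition False and refuses to create sandboxes for the pending pods ("No sandbox for pod can be found", "Error syncing pod, skipping"). The following is a minimal Go sketch of that kind of readiness probe — an illustration of the mechanism only, not CRI-O's or ocicni's actual code; the function name and the set of accepted file extensions are assumptions:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether confDir contains at least one CNI network
// definition. Runtimes scan a directory like this one and report
// NetworkReady=false until a config file appears (illustrative sketch).
func hasCNIConfig(confDir string) (bool, error) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions commonly accepted for CNI configs (assumption)
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		// This is the condition the kubelet keeps surfacing in the records above.
		fmt.Println("network plugin not ready: no CNI configuration file in /etc/kubernetes/cni/net.d/")
		return
	}
	fmt.Println("CNI configuration present; network can become ready")
}

Once the network operator writes a config file into that directory, this check flips and the repeated NodeNotReady records stop.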
2025-08-24T17:21:41Z" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.235275 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.235366 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.235380 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.235397 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.235408 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:42Z","lastTransitionTime":"2025-12-01T09:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:42 crc kubenswrapper[4933]: E1201 09:32:42.247278 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:42Z is after 
2025-08-24T17:21:41Z" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.250847 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.250892 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.250901 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.250916 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.250925 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:42Z","lastTransitionTime":"2025-12-01T09:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:42 crc kubenswrapper[4933]: E1201 09:32:42.262845 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:42Z is after 
2025-08-24T17:21:41Z" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.265957 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.266029 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.266045 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.266058 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.266067 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:42Z","lastTransitionTime":"2025-12-01T09:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:42 crc kubenswrapper[4933]: E1201 09:32:42.280832 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:42Z is after 
2025-08-24T17:21:41Z" Dec 01 09:32:42 crc kubenswrapper[4933]: E1201 09:32:42.281022 4933 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.282898 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.282941 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.282952 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.282970 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.282981 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:42Z","lastTransitionTime":"2025-12-01T09:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.385030 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.385074 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.385085 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.385101 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.385114 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:42Z","lastTransitionTime":"2025-12-01T09:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.487222 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.487275 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.487286 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.487315 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.487325 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:42Z","lastTransitionTime":"2025-12-01T09:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.589464 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.589510 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.589521 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.589543 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.589556 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:42Z","lastTransitionTime":"2025-12-01T09:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.668348 4933 scope.go:117] "RemoveContainer" containerID="6268f9b8702a410e58b0c9d7f1d98f1187ce90b8cda4009de507da7c854479a7" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.692045 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.692109 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.692121 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.692139 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.692151 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:42Z","lastTransitionTime":"2025-12-01T09:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.794692 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.795054 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.795067 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.795085 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.795096 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:42Z","lastTransitionTime":"2025-12-01T09:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.897331 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.897401 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.897415 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.897438 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:42 crc kubenswrapper[4933]: I1201 09:32:42.897449 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:42Z","lastTransitionTime":"2025-12-01T09:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.000069 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.000123 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.000132 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.000153 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.000164 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:43Z","lastTransitionTime":"2025-12-01T09:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.102182 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.102234 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.102245 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.102263 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.102274 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:43Z","lastTransitionTime":"2025-12-01T09:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.204427 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.204465 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.204476 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.204492 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.204502 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:43Z","lastTransitionTime":"2025-12-01T09:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.208016 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zccpd_d49bee31-b7e9-4daa-986f-b6f58c663813/ovnkube-controller/1.log" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.211378 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerStarted","Data":"3871eff8e2af46c5771ecd2db9ba030c26b8ef7cf8481ded1eaf32e97ed733a0"} Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.211873 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.226976 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:43Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.240593 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:43Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.256597 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:43Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.269540 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:43Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.282656 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:43Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.294345 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:43Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.306851 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.306890 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.306900 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.306914 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.306927 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:43Z","lastTransitionTime":"2025-12-01T09:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.319215 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3871eff8e2af46c5771ecd2db9ba030c26b8ef7c
f8481ded1eaf32e97ed733a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6268f9b8702a410e58b0c9d7f1d98f1187ce90b8cda4009de507da7c854479a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/downloads]} name:Service_openshift-console/downloads_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1201 09:32:21.993660 6338 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:43Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.333233 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:43Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.346460 4933 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-bcqz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e67470a-b3fe-4176-b546-fdf28012fce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bcqz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:43Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.359191 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25d453a5-e8e5-4563-a6af-2a0190fbe7eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cb9e78d01fb4f20fa14d20f2dd4b044fcedbebda97e0437e562e4c8b5e9072a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9beebf42bdb1ff95c1b5a3faeb820455a7c0fcb764f0b1f3fd892575a95334b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001dd3d1bb28861105ed423a5460657b031a040e934d0c789a766ca3f9499ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:43Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.372588 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:43Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.384046 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:43Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.397815 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:43Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.409588 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.409618 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.409626 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.409640 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.409649 4933 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:43Z","lastTransitionTime":"2025-12-01T09:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.410742 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:43Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.424485 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:43Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.438984 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://394798e74d5e23df64b5092d4f6a60763d9c14b7348b0b0ee607066cd3db0b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T09:32:43Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.450238 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6349096c-1520-4206-a85c-e4b3d12e2a57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be0448561ffbd1804ea3b1d6aa5124a87bdc861f066ec878932aebe7ef8cec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce495e866931fe759415255c08d443d7d5a62e5a746855bffdc0ddb67d6d7cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8g5jg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:43Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.512018 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.512068 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.512081 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.512100 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.512112 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:43Z","lastTransitionTime":"2025-12-01T09:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.615125 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.615182 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.615207 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.615233 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.615251 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:43Z","lastTransitionTime":"2025-12-01T09:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.666870 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.666901 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.666928 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.666976 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:43 crc kubenswrapper[4933]: E1201 09:32:43.667433 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:32:43 crc kubenswrapper[4933]: E1201 09:32:43.667535 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:32:43 crc kubenswrapper[4933]: E1201 09:32:43.667680 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:32:43 crc kubenswrapper[4933]: E1201 09:32:43.667860 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.719449 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.719512 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.719526 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.719551 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.719575 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:43Z","lastTransitionTime":"2025-12-01T09:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.823089 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.823175 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.823198 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.823228 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.823248 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:43Z","lastTransitionTime":"2025-12-01T09:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.926531 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.926576 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.926587 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.926607 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:43 crc kubenswrapper[4933]: I1201 09:32:43.926620 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:43Z","lastTransitionTime":"2025-12-01T09:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.030081 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.030170 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.030197 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.030254 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.030283 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:44Z","lastTransitionTime":"2025-12-01T09:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.133592 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.133666 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.133680 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.133706 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.133722 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:44Z","lastTransitionTime":"2025-12-01T09:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.237435 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.237497 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.237512 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.237535 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.237549 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:44Z","lastTransitionTime":"2025-12-01T09:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.340935 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.341018 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.341045 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.341080 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.341108 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:44Z","lastTransitionTime":"2025-12-01T09:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.443478 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.443525 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.443534 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.443550 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.443562 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:44Z","lastTransitionTime":"2025-12-01T09:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.546058 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.546106 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.546115 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.546129 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.546142 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:44Z","lastTransitionTime":"2025-12-01T09:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.648277 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.648373 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.648391 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.648412 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.648428 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:44Z","lastTransitionTime":"2025-12-01T09:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.750020 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.750090 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.750102 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.750118 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.750130 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:44Z","lastTransitionTime":"2025-12-01T09:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.852100 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.852162 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.852173 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.852218 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.852232 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:44Z","lastTransitionTime":"2025-12-01T09:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.954597 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.954649 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.954675 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.954698 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:44 crc kubenswrapper[4933]: I1201 09:32:44.954713 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:44Z","lastTransitionTime":"2025-12-01T09:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.056454 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.056489 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.056500 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.056518 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.056530 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:45Z","lastTransitionTime":"2025-12-01T09:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.158563 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.158609 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.158622 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.158638 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.158649 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:45Z","lastTransitionTime":"2025-12-01T09:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.220114 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zccpd_d49bee31-b7e9-4daa-986f-b6f58c663813/ovnkube-controller/2.log" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.220911 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zccpd_d49bee31-b7e9-4daa-986f-b6f58c663813/ovnkube-controller/1.log" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.223528 4933 generic.go:334] "Generic (PLEG): container finished" podID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerID="3871eff8e2af46c5771ecd2db9ba030c26b8ef7cf8481ded1eaf32e97ed733a0" exitCode=1 Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.223566 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerDied","Data":"3871eff8e2af46c5771ecd2db9ba030c26b8ef7cf8481ded1eaf32e97ed733a0"} Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.223599 4933 scope.go:117] "RemoveContainer" containerID="6268f9b8702a410e58b0c9d7f1d98f1187ce90b8cda4009de507da7c854479a7" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.224400 4933 scope.go:117] "RemoveContainer" containerID="3871eff8e2af46c5771ecd2db9ba030c26b8ef7cf8481ded1eaf32e97ed733a0" Dec 01 09:32:45 crc kubenswrapper[4933]: E1201 09:32:45.224583 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zccpd_openshift-ovn-kubernetes(d49bee31-b7e9-4daa-986f-b6f58c663813)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.237058 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25d453a5-e8e5-4563-a6af-2a0190fbe7eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cb9e78d01fb4f20fa14d20f2dd4b044fcedbebda97e0437e562e4c8b5e9072a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9beebf42bdb1ff95c1b5a3faeb820455a7c0fcb764f0b1f3fd892575a95334b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001dd3d1bb28861105ed423a5460657b031a040e934d0c789a766ca3f9499ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:45Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.248429 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:45Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.258751 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:45Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.260418 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.260446 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.260455 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.260469 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.260478 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:45Z","lastTransitionTime":"2025-12-01T09:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.270895 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:45Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.281940 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T09:32:45Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.294290 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:45Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.306634 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://394798e74d5e23df64b5092d4f6a60763d9c14b7348b0b0ee607066cd3db0b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:45Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.318071 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6349096c-1520-4206-a85c-e4b3d12e2a57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be0448561ffbd1804ea3b1d6aa5124a87bdc861f066ec878932aebe7ef8cec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce495e866931fe759415255c08d443d7d5a62e5a746855bffdc0ddb67d6d7cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podI
P\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8g5jg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:45Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.332794 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:45Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.345617 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:45Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.358740 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:45Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.361951 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.361981 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.361992 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.362006 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.362015 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:45Z","lastTransitionTime":"2025-12-01T09:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.369935 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:45Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.380750 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:45Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.390946 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:45Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.409862 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3871eff8e2af46c5771ecd2db9ba030c26b8ef7cf8481ded1eaf32e97ed733a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6268f9b8702a410e58b0c9d7f1d98f1187ce90b8cda4009de507da7c854479a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/downloads]} name:Service_openshift-console/downloads_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1201 09:32:21.993660 6338 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3871eff8e2af46c5771ecd2db9ba030c26b8ef7cf8481ded1eaf32e97ed733a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:44Z\\\",\\\"message\\\":\\\"formers/externalversions/factory.go:141\\\\nI1201 09:32:43.504953 6602 reflector.go:311] 
Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1201 09:32:43.505265 6602 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 09:32:43.505783 6602 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 09:32:43.509050 6602 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 09:32:43.509138 6602 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 09:32:43.509214 6602 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 09:32:43.509344 6602 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 09:32:43.509424 6602 factory.go:656] Stopping watch factory\\\\nI1201 09:32:43.514272 6602 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1201 09:32:43.514299 6602 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1201 09:32:43.514433 6602 ovnkube.go:599] Stopped ovnkube\\\\nI1201 09:32:43.514507 6602 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 09:32:43.514625 6602 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:45Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.423278 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:45Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.435390 4933 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-bcqz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e67470a-b3fe-4176-b546-fdf28012fce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bcqz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:45Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.464624 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.464665 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 
09:32:45.464677 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.464693 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.464704 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:45Z","lastTransitionTime":"2025-12-01T09:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.568157 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.568201 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.568210 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.568228 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.568238 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:45Z","lastTransitionTime":"2025-12-01T09:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.667482 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.667625 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.667633 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.667732 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:45 crc kubenswrapper[4933]: E1201 09:32:45.667732 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:32:45 crc kubenswrapper[4933]: E1201 09:32:45.667841 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:32:45 crc kubenswrapper[4933]: E1201 09:32:45.667975 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:32:45 crc kubenswrapper[4933]: E1201 09:32:45.668101 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.671154 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.671199 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.671213 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.671230 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.671244 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:45Z","lastTransitionTime":"2025-12-01T09:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.775031 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.775070 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.775082 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.775099 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.775112 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:45Z","lastTransitionTime":"2025-12-01T09:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.877207 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.877242 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.877250 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.877264 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.877272 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:45Z","lastTransitionTime":"2025-12-01T09:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.979715 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.979773 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.979790 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.979813 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:45 crc kubenswrapper[4933]: I1201 09:32:45.979832 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:45Z","lastTransitionTime":"2025-12-01T09:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.082897 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.082945 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.082959 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.082982 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.082996 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:46Z","lastTransitionTime":"2025-12-01T09:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.185716 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.185771 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.185783 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.185800 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.185812 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:46Z","lastTransitionTime":"2025-12-01T09:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.228650 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zccpd_d49bee31-b7e9-4daa-986f-b6f58c663813/ovnkube-controller/2.log" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.288759 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.288795 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.288808 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.288824 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.288838 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:46Z","lastTransitionTime":"2025-12-01T09:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.391755 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.391800 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.391814 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.391831 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.391843 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:46Z","lastTransitionTime":"2025-12-01T09:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.494940 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.495070 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.495086 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.495109 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.495126 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:46Z","lastTransitionTime":"2025-12-01T09:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.597446 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.597491 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.597499 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.597512 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.597522 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:46Z","lastTransitionTime":"2025-12-01T09:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.700037 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.700088 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.700096 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.700110 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.700121 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:46Z","lastTransitionTime":"2025-12-01T09:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.803238 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.803352 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.803381 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.803413 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.803434 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:46Z","lastTransitionTime":"2025-12-01T09:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.906163 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.906245 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.906257 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.906278 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:46 crc kubenswrapper[4933]: I1201 09:32:46.906290 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:46Z","lastTransitionTime":"2025-12-01T09:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.009159 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.009216 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.009233 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.009255 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.009272 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:47Z","lastTransitionTime":"2025-12-01T09:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.111498 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.111544 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.111554 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.111570 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.111581 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:47Z","lastTransitionTime":"2025-12-01T09:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.214190 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.214238 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.214250 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.214269 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.214282 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:47Z","lastTransitionTime":"2025-12-01T09:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.317084 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.317133 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.317148 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.317167 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.317177 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:47Z","lastTransitionTime":"2025-12-01T09:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.419340 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.419415 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.419427 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.419446 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.419461 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:47Z","lastTransitionTime":"2025-12-01T09:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.522141 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.522194 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.522209 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.522229 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.522241 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:47Z","lastTransitionTime":"2025-12-01T09:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.624209 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.624252 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.624262 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.624277 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.624288 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:47Z","lastTransitionTime":"2025-12-01T09:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.666933 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:47 crc kubenswrapper[4933]: E1201 09:32:47.667060 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.666946 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.666933 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:47 crc kubenswrapper[4933]: E1201 09:32:47.667136 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.666955 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:47 crc kubenswrapper[4933]: E1201 09:32:47.667364 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:32:47 crc kubenswrapper[4933]: E1201 09:32:47.667381 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.727638 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.727694 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.727707 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.727721 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.727731 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:47Z","lastTransitionTime":"2025-12-01T09:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.829832 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.829872 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.829884 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.829903 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.829911 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:47Z","lastTransitionTime":"2025-12-01T09:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.932047 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.932080 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.932091 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.932105 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:47 crc kubenswrapper[4933]: I1201 09:32:47.932116 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:47Z","lastTransitionTime":"2025-12-01T09:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.034328 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.034384 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.034393 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.034415 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.034426 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:48Z","lastTransitionTime":"2025-12-01T09:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.138000 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.138089 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.138106 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.138128 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.138141 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:48Z","lastTransitionTime":"2025-12-01T09:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.244027 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.244084 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.244099 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.244127 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.244141 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:48Z","lastTransitionTime":"2025-12-01T09:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.348284 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.348809 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.348919 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.349009 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.349103 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:48Z","lastTransitionTime":"2025-12-01T09:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.452379 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.452431 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.452445 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.452469 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.452484 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:48Z","lastTransitionTime":"2025-12-01T09:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.555667 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.555720 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.555732 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.555754 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.555765 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:48Z","lastTransitionTime":"2025-12-01T09:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.658875 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.658933 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.658944 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.658959 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.658971 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:48Z","lastTransitionTime":"2025-12-01T09:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.761811 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.761856 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.761869 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.761886 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.761899 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:48Z","lastTransitionTime":"2025-12-01T09:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.863846 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.863893 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.863905 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.863922 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.863933 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:48Z","lastTransitionTime":"2025-12-01T09:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.966708 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.966773 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.966798 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.966820 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:48 crc kubenswrapper[4933]: I1201 09:32:48.966845 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:48Z","lastTransitionTime":"2025-12-01T09:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.069895 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.069967 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.069989 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.070015 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.070032 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:49Z","lastTransitionTime":"2025-12-01T09:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.173753 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.173814 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.173828 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.173848 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.173862 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:49Z","lastTransitionTime":"2025-12-01T09:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.276497 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.276574 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.276586 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.276613 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.276630 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:49Z","lastTransitionTime":"2025-12-01T09:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.379166 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.379226 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.379237 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.379255 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.379268 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:49Z","lastTransitionTime":"2025-12-01T09:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.482273 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.482363 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.482378 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.482395 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.482408 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:49Z","lastTransitionTime":"2025-12-01T09:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.584687 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.584766 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.584781 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.584807 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.584824 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:49Z","lastTransitionTime":"2025-12-01T09:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.667286 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:49 crc kubenswrapper[4933]: E1201 09:32:49.667574 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.667599 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.667698 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:49 crc kubenswrapper[4933]: E1201 09:32:49.667832 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.667935 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:49 crc kubenswrapper[4933]: E1201 09:32:49.671202 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:32:49 crc kubenswrapper[4933]: E1201 09:32:49.671508 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.687536 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.687586 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.687598 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.687620 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.687634 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:49Z","lastTransitionTime":"2025-12-01T09:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
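
Each "Error syncing pod, skipping" record above is gated on the same NetworkReady=false condition: the runtime found no CNI configuration where it expects one. A rough way to reproduce the kubelet's complaint from the node, sketched in Go under the assumption that a .conf, .conflist, or .json file in the directory named by the log is what would satisfy the check (illustrative only, not kubelet source):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory named in the "no CNI configuration file" message above.
        dir := "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Printf("cannot read %s: %v\n", dir, err)
            os.Exit(1)
        }
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Printf("found CNI config: %s\n", filepath.Join(dir, e.Name()))
                return
            }
        }
        // Matches the condition behind "NetworkReady=false ...
        // no CNI configuration file".
        fmt.Println("no CNI configuration file found; network plugin not ready")
        os.Exit(1)
    }
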
Has your network provider started?"} Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.688514 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25d453a5-e8e5-4563-a6af-2a0190fbe7eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cb9e78d01fb4f20fa14d20f2dd4b044fcedbebda97e0437e562e4c8b5e9072a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9beebf42bdb1ff95c1b5a3faeb820455a7c0fcb764f0b1f3fd892575a95334b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001dd3d1bb28861105ed423a5460657b031a040e934d0c789a766ca3f9499ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:49Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.707008 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:49Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.721861 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTi
me\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:49Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.737378 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:49Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.749522 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T09:32:49Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.763326 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:49Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.778581 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://394798e74d5e23df64b5092d4f6a60763d9c14b7348b0b0ee607066cd3db0b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:49Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.790313 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.790355 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.790366 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.790380 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.790389 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:49Z","lastTransitionTime":"2025-12-01T09:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.793056 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6349096c-1520-4206-a85c-e4b3d12e2a57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be0448561ffbd1804ea3b1d6aa5124a87bdc861f066ec878932aebe7ef8cec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce495e866931fe759415255c08d443d7d5a62e5a746855bffdc0ddb67d6d7cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8g5jg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:49Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.806783 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:49Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.819341 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:49Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.831585 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:49Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.842909 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:49Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.854771 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:49Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.866973 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:49Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.893212 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.893251 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.893263 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.893279 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.893291 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:49Z","lastTransitionTime":"2025-12-01T09:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.906458 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3871eff8e2af46c5771ecd2db9ba030c26b8ef7c
f8481ded1eaf32e97ed733a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6268f9b8702a410e58b0c9d7f1d98f1187ce90b8cda4009de507da7c854479a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/downloads]} name:Service_openshift-console/downloads_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1201 09:32:21.993660 6338 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3871eff8e2af46c5771ecd2db9ba030c26b8ef7cf8481ded1eaf32e97ed733a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:44Z\\\",\\\"message\\\":\\\"formers/externalversions/factory.go:141\\\\nI1201 09:32:43.504953 6602 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1201 09:32:43.505265 6602 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 09:32:43.505783 6602 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 09:32:43.509050 6602 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 09:32:43.509138 6602 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 09:32:43.509214 6602 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 09:32:43.509344 6602 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 09:32:43.509424 6602 factory.go:656] Stopping watch factory\\\\nI1201 09:32:43.514272 6602 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1201 09:32:43.514299 6602 services_controller.go:204] Setting up event handlers for services for 
network=default\\\\nI1201 09:32:43.514433 6602 ovnkube.go:599] Stopped ovnkube\\\\nI1201 09:32:43.514507 6602 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 09:32:43.514625 6602 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip
\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:49Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.933975 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:49Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.953591 4933 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-bcqz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e67470a-b3fe-4176-b546-fdf28012fce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bcqz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:49Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.995931 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.995979 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 
09:32:49.995990 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.996005 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:49 crc kubenswrapper[4933]: I1201 09:32:49.996014 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:49Z","lastTransitionTime":"2025-12-01T09:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.099707 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.100176 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.100193 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.100214 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.100227 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:50Z","lastTransitionTime":"2025-12-01T09:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.204032 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.204112 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.204134 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.204166 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.204188 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:50Z","lastTransitionTime":"2025-12-01T09:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.306879 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.306979 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.306991 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.307008 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.307021 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:50Z","lastTransitionTime":"2025-12-01T09:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.410413 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.410449 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.410458 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.410472 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.410481 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:50Z","lastTransitionTime":"2025-12-01T09:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.512427 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.512473 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.512484 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.512499 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.512511 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:50Z","lastTransitionTime":"2025-12-01T09:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.614846 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.614885 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.614893 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.614908 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.614917 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:50Z","lastTransitionTime":"2025-12-01T09:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.717160 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.717225 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.717238 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.717254 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.717265 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:50Z","lastTransitionTime":"2025-12-01T09:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.821723 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.821792 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.821809 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.821831 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.821846 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:50Z","lastTransitionTime":"2025-12-01T09:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.924056 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.924135 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.924160 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.924192 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:50 crc kubenswrapper[4933]: I1201 09:32:50.924216 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:50Z","lastTransitionTime":"2025-12-01T09:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.027020 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.027062 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.027071 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.027086 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.027096 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:51Z","lastTransitionTime":"2025-12-01T09:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.129590 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.129646 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.129659 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.129676 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.129687 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:51Z","lastTransitionTime":"2025-12-01T09:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.232288 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.232341 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.232351 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.232369 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.232383 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:51Z","lastTransitionTime":"2025-12-01T09:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.336689 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.336759 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.336780 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.336812 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.336831 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:51Z","lastTransitionTime":"2025-12-01T09:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.440999 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.441045 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.441055 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.441070 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.441082 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:51Z","lastTransitionTime":"2025-12-01T09:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.544115 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.544194 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.544216 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.544249 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.544273 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:51Z","lastTransitionTime":"2025-12-01T09:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.647530 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.647592 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.647605 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.647625 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.647639 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:51Z","lastTransitionTime":"2025-12-01T09:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.667081 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.667108 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.667203 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:51 crc kubenswrapper[4933]: E1201 09:32:51.667349 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.667415 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:51 crc kubenswrapper[4933]: E1201 09:32:51.667503 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:32:51 crc kubenswrapper[4933]: E1201 09:32:51.667606 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:32:51 crc kubenswrapper[4933]: E1201 09:32:51.667692 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.749775 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.749815 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.749825 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.749840 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.749848 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:51Z","lastTransitionTime":"2025-12-01T09:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.853535 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.853605 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.853615 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.854049 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.854242 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:51Z","lastTransitionTime":"2025-12-01T09:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.957648 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.957715 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.957728 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.957749 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:51 crc kubenswrapper[4933]: I1201 09:32:51.957766 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:51Z","lastTransitionTime":"2025-12-01T09:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.060681 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.060754 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.060770 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.060790 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.060803 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:52Z","lastTransitionTime":"2025-12-01T09:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.163645 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.163746 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.163759 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.163783 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.163795 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:52Z","lastTransitionTime":"2025-12-01T09:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.266829 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.266874 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.266884 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.266904 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.266915 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:52Z","lastTransitionTime":"2025-12-01T09:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.336927 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.337008 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.337023 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.337045 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.337058 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:52Z","lastTransitionTime":"2025-12-01T09:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:52 crc kubenswrapper[4933]: E1201 09:32:52.354437 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:52Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.359137 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.359204 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.359219 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.359244 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.359261 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:52Z","lastTransitionTime":"2025-12-01T09:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:52 crc kubenswrapper[4933]: E1201 09:32:52.374594 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:52Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.380060 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.380136 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.380150 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.380171 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.380185 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:52Z","lastTransitionTime":"2025-12-01T09:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:52 crc kubenswrapper[4933]: E1201 09:32:52.393786 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:52Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.399040 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.399100 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.399111 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.399132 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.399148 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:52Z","lastTransitionTime":"2025-12-01T09:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:52 crc kubenswrapper[4933]: E1201 09:32:52.417525 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:52Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.423017 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.423065 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.423074 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.423093 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.423106 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:52Z","lastTransitionTime":"2025-12-01T09:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:52 crc kubenswrapper[4933]: E1201 09:32:52.439108 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:52Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:52 crc kubenswrapper[4933]: E1201 09:32:52.439289 4933 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.441232 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.441271 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.441282 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.441298 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.441327 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:52Z","lastTransitionTime":"2025-12-01T09:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.544601 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.544650 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.544660 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.544684 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.544703 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:52Z","lastTransitionTime":"2025-12-01T09:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.647755 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.647820 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.647840 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.647882 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.647934 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:52Z","lastTransitionTime":"2025-12-01T09:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.750925 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.750974 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.750984 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.750999 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.751011 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:52Z","lastTransitionTime":"2025-12-01T09:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.854100 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.854165 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.854178 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.854196 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.854232 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:52Z","lastTransitionTime":"2025-12-01T09:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.956857 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.956906 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.956919 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.956935 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:52 crc kubenswrapper[4933]: I1201 09:32:52.956947 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:52Z","lastTransitionTime":"2025-12-01T09:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.058928 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.058974 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.058986 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.059005 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.059014 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:53Z","lastTransitionTime":"2025-12-01T09:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.161774 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.161841 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.161855 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.161871 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.161879 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:53Z","lastTransitionTime":"2025-12-01T09:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.265362 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.265402 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.265410 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.265424 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.265433 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:53Z","lastTransitionTime":"2025-12-01T09:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.367616 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.367670 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.367684 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.367702 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.367714 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:53Z","lastTransitionTime":"2025-12-01T09:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.470635 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.470685 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.470694 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.470710 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.470721 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:53Z","lastTransitionTime":"2025-12-01T09:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.573039 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.573098 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.573108 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.573130 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.573150 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:53Z","lastTransitionTime":"2025-12-01T09:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.667333 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.667374 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.667350 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.667348 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:53 crc kubenswrapper[4933]: E1201 09:32:53.667463 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:32:53 crc kubenswrapper[4933]: E1201 09:32:53.667521 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:32:53 crc kubenswrapper[4933]: E1201 09:32:53.667620 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:32:53 crc kubenswrapper[4933]: E1201 09:32:53.667729 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.674681 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.674709 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.674720 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.674733 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.674742 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:53Z","lastTransitionTime":"2025-12-01T09:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.776886 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.776929 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.776940 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.776960 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.776977 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:53Z","lastTransitionTime":"2025-12-01T09:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.879394 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.879430 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.879440 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.879453 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.879462 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:53Z","lastTransitionTime":"2025-12-01T09:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.982257 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.982295 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.982324 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.982341 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:53 crc kubenswrapper[4933]: I1201 09:32:53.982352 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:53Z","lastTransitionTime":"2025-12-01T09:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.084916 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.084948 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.084977 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.085010 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.085021 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:54Z","lastTransitionTime":"2025-12-01T09:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.187447 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.187487 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.187498 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.187515 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.187526 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:54Z","lastTransitionTime":"2025-12-01T09:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.289570 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.289601 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.289611 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.289624 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.289634 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:54Z","lastTransitionTime":"2025-12-01T09:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.352259 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs\") pod \"network-metrics-daemon-bcqz5\" (UID: \"9e67470a-b3fe-4176-b546-fdf28012fce5\") " pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:54 crc kubenswrapper[4933]: E1201 09:32:54.352472 4933 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 09:32:54 crc kubenswrapper[4933]: E1201 09:32:54.352561 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs podName:9e67470a-b3fe-4176-b546-fdf28012fce5 nodeName:}" failed. No retries permitted until 2025-12-01 09:33:26.352538235 +0000 UTC m=+96.994262010 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs") pod "network-metrics-daemon-bcqz5" (UID: "9e67470a-b3fe-4176-b546-fdf28012fce5") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.391982 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.392024 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.392044 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.392060 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.392071 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:54Z","lastTransitionTime":"2025-12-01T09:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.494779 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.494822 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.494831 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.494844 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.494855 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:54Z","lastTransitionTime":"2025-12-01T09:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.597463 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.597510 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.597522 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.597537 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.597547 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:54Z","lastTransitionTime":"2025-12-01T09:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.700403 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.700449 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.700463 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.700491 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.700503 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:54Z","lastTransitionTime":"2025-12-01T09:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.802819 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.802863 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.802875 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.802892 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.802903 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:54Z","lastTransitionTime":"2025-12-01T09:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.905598 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.905642 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.905654 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.905672 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:54 crc kubenswrapper[4933]: I1201 09:32:54.905685 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:54Z","lastTransitionTime":"2025-12-01T09:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.009462 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.009519 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.009536 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.009561 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.009573 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:55Z","lastTransitionTime":"2025-12-01T09:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.112542 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.112579 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.112591 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.112607 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.112619 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:55Z","lastTransitionTime":"2025-12-01T09:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.214332 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.214373 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.214384 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.214402 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.214414 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:55Z","lastTransitionTime":"2025-12-01T09:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.316579 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.316613 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.316621 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.316634 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.316646 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:55Z","lastTransitionTime":"2025-12-01T09:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.419454 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.419502 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.419515 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.419538 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.419575 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:55Z","lastTransitionTime":"2025-12-01T09:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.521777 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.521809 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.521820 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.521837 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.521847 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:55Z","lastTransitionTime":"2025-12-01T09:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.624173 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.624224 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.624236 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.624254 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.624267 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:55Z","lastTransitionTime":"2025-12-01T09:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.666769 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.666782 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:55 crc kubenswrapper[4933]: E1201 09:32:55.667073 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.666838 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.666803 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:55 crc kubenswrapper[4933]: E1201 09:32:55.667185 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:32:55 crc kubenswrapper[4933]: E1201 09:32:55.667255 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:32:55 crc kubenswrapper[4933]: E1201 09:32:55.667346 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.726515 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.726597 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.726612 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.726630 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.726643 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:55Z","lastTransitionTime":"2025-12-01T09:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.829257 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.829318 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.829332 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.829348 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.829358 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:55Z","lastTransitionTime":"2025-12-01T09:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.931406 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.931440 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.931448 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.931461 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:55 crc kubenswrapper[4933]: I1201 09:32:55.931470 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:55Z","lastTransitionTime":"2025-12-01T09:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.033547 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.033600 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.033612 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.033627 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.033638 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:56Z","lastTransitionTime":"2025-12-01T09:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.136566 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.136604 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.136614 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.136631 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.136642 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:56Z","lastTransitionTime":"2025-12-01T09:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.238457 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.238490 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.238498 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.238511 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.238519 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:56Z","lastTransitionTime":"2025-12-01T09:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.341190 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.341234 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.341245 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.341260 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.341272 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:56Z","lastTransitionTime":"2025-12-01T09:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.444016 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.444055 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.444064 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.444077 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.444089 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:56Z","lastTransitionTime":"2025-12-01T09:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.547215 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.547270 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.547284 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.547331 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.547348 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:56Z","lastTransitionTime":"2025-12-01T09:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.649712 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.649748 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.649757 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.649770 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.649779 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:56Z","lastTransitionTime":"2025-12-01T09:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.752222 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.752296 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.752331 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.752356 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.752370 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:56Z","lastTransitionTime":"2025-12-01T09:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.854899 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.854938 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.854949 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.854962 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.854971 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:56Z","lastTransitionTime":"2025-12-01T09:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.957214 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.957253 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.957261 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.957284 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:56 crc kubenswrapper[4933]: I1201 09:32:56.957296 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:56Z","lastTransitionTime":"2025-12-01T09:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.059988 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.060035 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.060046 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.060064 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.060077 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:57Z","lastTransitionTime":"2025-12-01T09:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.162330 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.162375 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.162385 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.162402 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.162416 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:57Z","lastTransitionTime":"2025-12-01T09:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.265159 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.265190 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.265200 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.265222 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.265232 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:57Z","lastTransitionTime":"2025-12-01T09:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.368867 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.368914 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.368926 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.368942 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.368953 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:57Z","lastTransitionTime":"2025-12-01T09:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.472471 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.472514 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.472562 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.472582 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.472594 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:57Z","lastTransitionTime":"2025-12-01T09:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.575604 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.575644 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.575656 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.575672 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.575682 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:57Z","lastTransitionTime":"2025-12-01T09:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.667090 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.667124 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.667282 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:57 crc kubenswrapper[4933]: E1201 09:32:57.667528 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.667580 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:57 crc kubenswrapper[4933]: E1201 09:32:57.667682 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:32:57 crc kubenswrapper[4933]: E1201 09:32:57.668187 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:32:57 crc kubenswrapper[4933]: E1201 09:32:57.668292 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.668464 4933 scope.go:117] "RemoveContainer" containerID="3871eff8e2af46c5771ecd2db9ba030c26b8ef7cf8481ded1eaf32e97ed733a0" Dec 01 09:32:57 crc kubenswrapper[4933]: E1201 09:32:57.668815 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zccpd_openshift-ovn-kubernetes(d49bee31-b7e9-4daa-986f-b6f58c663813)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.678766 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.678819 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.678829 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.678846 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.678861 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:57Z","lastTransitionTime":"2025-12-01T09:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.684848 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:57Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.698779 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:57Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.714747 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:57Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.729049 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:57Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.745221 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:57Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.759569 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:57Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.779938 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3871eff8e2af46c5771ecd2db9ba030c26b8ef7cf8481ded1eaf32e97ed733a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3871eff8e2af46c5771ecd2db9ba030c26b8ef7cf8481ded1eaf32e97ed733a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:44Z\\\",\\\"message\\\":\\\"formers/externalversions/factory.go:141\\\\nI1201 09:32:43.504953 6602 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1201 09:32:43.505265 6602 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 09:32:43.505783 6602 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 09:32:43.509050 6602 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 09:32:43.509138 6602 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 09:32:43.509214 6602 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 09:32:43.509344 6602 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 09:32:43.509424 6602 factory.go:656] Stopping watch factory\\\\nI1201 09:32:43.514272 6602 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1201 09:32:43.514299 6602 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1201 09:32:43.514433 6602 ovnkube.go:599] Stopped ovnkube\\\\nI1201 09:32:43.514507 6602 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 09:32:43.514625 6602 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zccpd_openshift-ovn-kubernetes(d49bee31-b7e9-4daa-986f-b6f58c663813)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:57Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.782005 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.782068 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.782087 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.782112 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.782129 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:57Z","lastTransitionTime":"2025-12-01T09:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.795660 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:57Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.809128 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bcqz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e67470a-b3fe-4176-b546-fdf28012fce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bcqz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:57Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.823205 4933 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25d453a5-e8e5-4563-a6af-2a0190fbe7eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cb9e78d01fb4f20fa14d20f2dd4b044fcedbebda97e0437e562e4c8b5e9072a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9beebf42bdb1ff95c1b5a3faeb820455a7c0fcb764f0b1f3fd892575a95334b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001dd3d1bb28861105ed423a5460657b031a040e934d0c789a766ca3f9499ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:57Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.840503 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:57Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.852880 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:57Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.865373 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6349096c-1520-4206-a85c-e4b3d12e2a57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be0448561ffbd1804ea3b1d6aa5124a87bdc861f066ec878932aebe7ef8cec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce495e866931fe759415255c08d443d7d5a62e5a746855bffdc0ddb67d6d7cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\
"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8g5jg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:57Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.878733 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:57Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.885741 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.885778 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.885790 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.885808 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.885817 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:57Z","lastTransitionTime":"2025-12-01T09:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.893824 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:57Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.915389 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:57Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.935624 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://394798e74d5e23df64b5092d4f6a60763d9c14b7348b0b0ee607066cd3db0b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T09:32:57Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.989529 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.989578 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.989619 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.989637 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:57 crc kubenswrapper[4933]: I1201 09:32:57.989650 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:57Z","lastTransitionTime":"2025-12-01T09:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.091974 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.092028 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.092039 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.092057 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.092068 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:58Z","lastTransitionTime":"2025-12-01T09:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.194660 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.194721 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.194735 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.194754 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.194767 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:58Z","lastTransitionTime":"2025-12-01T09:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.279552 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4fncv_f0c7b4b8-8e07-4bd4-b811-cdb373873e8a/kube-multus/0.log" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.279628 4933 generic.go:334] "Generic (PLEG): container finished" podID="f0c7b4b8-8e07-4bd4-b811-cdb373873e8a" containerID="8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255" exitCode=1 Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.279674 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4fncv" event={"ID":"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a","Type":"ContainerDied","Data":"8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255"} Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.280221 4933 scope.go:117] "RemoveContainer" containerID="8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.294930 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://394798e74d5e23df64b5092d4f6a60763d9c14b7348b0b0ee607066cd3db0b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:58Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.297258 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.297297 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:58 crc 
kubenswrapper[4933]: I1201 09:32:58.297323 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.297346 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.297358 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:58Z","lastTransitionTime":"2025-12-01T09:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.319709 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6349096c-1520-4206-a85c-e4b3d12e2a57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be0448561ffbd1804ea3b1d6aa5124a87bdc861f066ec878932aebe7ef8cec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce495e866931fe759415255c08d443d7d5a62e5a746855bffdc0ddb67d6d7cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:3
2:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8g5jg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:58Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.333770 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:58Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.347899 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T09:32:58Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.365932 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:57Z\\\",\\\"message\\\":\\\"2025-12-01T09:32:12+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_792cc42c-a3cc-430c-9e1b-e07d3cee31b1\\\\n2025-12-01T09:32:12+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_792cc42c-a3cc-430c-9e1b-e07d3cee31b1 to /host/opt/cni/bin/\\\\n2025-12-01T09:32:12Z [verbose] multus-daemon started\\\\n2025-12-01T09:32:12Z [verbose] Readiness Indicator file check\\\\n2025-12-01T09:32:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:58Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.382995 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:58Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.401554 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.401632 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.401643 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.401702 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.401716 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:58Z","lastTransitionTime":"2025-12-01T09:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.401880 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:58Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.418847 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:58Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.435610 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:58Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.452367 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:58Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.467915 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:58Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.492297 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3871eff8e2af46c5771ecd2db9ba030c26b8ef7c
f8481ded1eaf32e97ed733a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3871eff8e2af46c5771ecd2db9ba030c26b8ef7cf8481ded1eaf32e97ed733a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:44Z\\\",\\\"message\\\":\\\"formers/externalversions/factory.go:141\\\\nI1201 09:32:43.504953 6602 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1201 09:32:43.505265 6602 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 09:32:43.505783 6602 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 09:32:43.509050 6602 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 09:32:43.509138 6602 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 09:32:43.509214 6602 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 09:32:43.509344 6602 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 09:32:43.509424 6602 factory.go:656] Stopping watch factory\\\\nI1201 09:32:43.514272 6602 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1201 09:32:43.514299 6602 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1201 09:32:43.514433 6602 ovnkube.go:599] Stopped ovnkube\\\\nI1201 09:32:43.514507 6602 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 09:32:43.514625 6602 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zccpd_openshift-ovn-kubernetes(d49bee31-b7e9-4daa-986f-b6f58c663813)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:58Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.505372 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.505774 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.505889 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.505985 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.506074 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:58Z","lastTransitionTime":"2025-12-01T09:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.507993 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:58Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.521686 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bcqz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e67470a-b3fe-4176-b546-fdf28012fce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bcqz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:58Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.534768 4933 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25d453a5-e8e5-4563-a6af-2a0190fbe7eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cb9e78d01fb4f20fa14d20f2dd4b044fcedbebda97e0437e562e4c8b5e9072a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9beebf42bdb1ff95c1b5a3faeb820455a7c0fcb764f0b1f3fd892575a95334b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001dd3d1bb28861105ed423a5460657b031a040e934d0c789a766ca3f9499ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:58Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.551205 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:58Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.562666 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:58Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.608696 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.608739 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.608750 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.608767 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.608776 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:58Z","lastTransitionTime":"2025-12-01T09:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.712374 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.712955 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.712971 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.712994 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.713005 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:58Z","lastTransitionTime":"2025-12-01T09:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.816582 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.816647 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.816660 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.816681 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.816721 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:58Z","lastTransitionTime":"2025-12-01T09:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.918917 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.918952 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.918960 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.918975 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:58 crc kubenswrapper[4933]: I1201 09:32:58.918984 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:58Z","lastTransitionTime":"2025-12-01T09:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.021600 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.021650 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.021659 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.021675 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.021685 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:59Z","lastTransitionTime":"2025-12-01T09:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.124379 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.124415 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.124425 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.124440 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.124450 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:59Z","lastTransitionTime":"2025-12-01T09:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.226878 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.226926 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.226939 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.226955 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.226967 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:59Z","lastTransitionTime":"2025-12-01T09:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.286074 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4fncv_f0c7b4b8-8e07-4bd4-b811-cdb373873e8a/kube-multus/0.log" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.286164 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4fncv" event={"ID":"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a","Type":"ContainerStarted","Data":"1ac251024105496fb2cd821720a3ad6e717ef9c6da03401d62a0d58a96dce58f"} Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.298964 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.310510 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.325148 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.328950 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.328999 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.329011 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:59 crc 
kubenswrapper[4933]: I1201 09:32:59.329033 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.329046 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:59Z","lastTransitionTime":"2025-12-01T09:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.340936 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.356880 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.369717 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bcqz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e67470a-b3fe-4176-b546-fdf28012fce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bcqz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.384102 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.407417 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3871eff8e2af46c5771ecd2db9ba030c26b8ef7c
f8481ded1eaf32e97ed733a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3871eff8e2af46c5771ecd2db9ba030c26b8ef7cf8481ded1eaf32e97ed733a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:44Z\\\",\\\"message\\\":\\\"formers/externalversions/factory.go:141\\\\nI1201 09:32:43.504953 6602 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1201 09:32:43.505265 6602 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 09:32:43.505783 6602 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 09:32:43.509050 6602 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 09:32:43.509138 6602 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 09:32:43.509214 6602 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 09:32:43.509344 6602 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 09:32:43.509424 6602 factory.go:656] Stopping watch factory\\\\nI1201 09:32:43.514272 6602 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1201 09:32:43.514299 6602 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1201 09:32:43.514433 6602 ovnkube.go:599] Stopped ovnkube\\\\nI1201 09:32:43.514507 6602 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 09:32:43.514625 6602 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zccpd_openshift-ovn-kubernetes(d49bee31-b7e9-4daa-986f-b6f58c663813)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.421215 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.431580 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.431631 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.431643 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.431664 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.431680 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:59Z","lastTransitionTime":"2025-12-01T09:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.433734 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.447091 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25d453a5-e8e5-4563-a6af-2a0190fbe7eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cb9e78d01fb4f20fa14d20f2dd4b044fcedbebda97e0437e562e4c8b5e9072a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9beebf42bdb1ff95c1b5a3faeb820455a7c0fcb764f0b1f3fd892575a95334b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001dd3d1bb28861105ed423a5460657b031a040e934d0c789a766ca3f9499ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.459858 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.475098 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ac251024105496fb2cd821720a3ad6e717ef9c6da03401d62a0d58a96dce58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:57Z\\\",\\\"message\\\":\\\"2025-12-01T09:32:12+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_792cc42c-a3cc-430c-9e1b-e07d3cee31b1\\\\n2025-12-01T09:32:12+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_792cc42c-a3cc-430c-9e1b-e07d3cee31b1 to /host/opt/cni/bin/\\\\n2025-12-01T09:32:12Z [verbose] multus-daemon started\\\\n2025-12-01T09:32:12Z [verbose] Readiness Indicator file check\\\\n2025-12-01T09:32:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.491459 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://394798e74d5e23df64b5092d4f6a60763d9c14b7348b0b0ee607066cd3db0b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.509527 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6349096c-1520-4206-a85c-e4b3d12e2a57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be0448561ffbd1804ea3b1d6aa5124a87bdc861f066ec878932aebe7ef8cec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce495e866931fe759415255c08d443d7d5a62e5a746855bffdc0ddb67d6d7cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8g5jg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 
09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.531020 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.534626 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.534860 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.534970 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.535082 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.535202 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:59Z","lastTransitionTime":"2025-12-01T09:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.553777 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.638124 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.638175 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.638186 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.638204 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.638215 4933 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:59Z","lastTransitionTime":"2025-12-01T09:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.666980 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.667046 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.667062 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.666985 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:32:59 crc kubenswrapper[4933]: E1201 09:32:59.667208 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:32:59 crc kubenswrapper[4933]: E1201 09:32:59.667386 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:32:59 crc kubenswrapper[4933]: E1201 09:32:59.667467 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:32:59 crc kubenswrapper[4933]: E1201 09:32:59.667636 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.682060 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25d453a5-e8e5-4563-a6af-2a0190fbe7eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cb9e78d01fb4f20fa14d20f2dd4b044fcedbebda97e0437e562e4c8b5e9072a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9beebf42bdb1ff95c1b5a3faeb820455a7c0fcb764f0b1f3fd892575a95334b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001dd3d1bb28861105ed423a5460657b031a040e934d0c789a766ca3f9499ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.697791 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.711612 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.723946 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.734212 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.740556 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.740597 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.740608 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.740624 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.740637 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:59Z","lastTransitionTime":"2025-12-01T09:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.750232 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ac251024105496fb2cd821720a3ad6e717ef9c6da03401d62a0d58a96dce58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:57Z\\\",\\\"message\\\":\\\"2025-12-01T09:32:12+00:00 [cnibincopy] Successfully 
copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_792cc42c-a3cc-430c-9e1b-e07d3cee31b1\\\\n2025-12-01T09:32:12+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_792cc42c-a3cc-430c-9e1b-e07d3cee31b1 to /host/opt/cni/bin/\\\\n2025-12-01T09:32:12Z [verbose] multus-daemon started\\\\n2025-12-01T09:32:12Z [verbose] Readiness Indicator file check\\\\n2025-12-01T09:32:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.767101 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://394798e74d5e23df64b5092d4f6a60763d9c14b7348b0b0ee607066cd3db0b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.778801 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6349096c-1520-4206-a85c-e4b3d12e2a57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be0448561ffbd1804ea3b1d6aa5124a87bdc861f066ec878932aebe7ef8cec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce495e866931fe759415255c08d443d7d5a62e5a746855bffdc0ddb67d6d7cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8g5jg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 
09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.794274 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.807671 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.819377 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.833786 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.842603 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.842647 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.842659 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.842675 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.842687 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:59Z","lastTransitionTime":"2025-12-01T09:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.851213 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.865679 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.888416 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3871eff8e2af46c5771ecd2db9ba030c26b8ef7c
f8481ded1eaf32e97ed733a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3871eff8e2af46c5771ecd2db9ba030c26b8ef7cf8481ded1eaf32e97ed733a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:44Z\\\",\\\"message\\\":\\\"formers/externalversions/factory.go:141\\\\nI1201 09:32:43.504953 6602 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1201 09:32:43.505265 6602 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 09:32:43.505783 6602 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 09:32:43.509050 6602 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 09:32:43.509138 6602 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 09:32:43.509214 6602 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 09:32:43.509344 6602 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 09:32:43.509424 6602 factory.go:656] Stopping watch factory\\\\nI1201 09:32:43.514272 6602 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1201 09:32:43.514299 6602 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1201 09:32:43.514433 6602 ovnkube.go:599] Stopped ovnkube\\\\nI1201 09:32:43.514507 6602 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 09:32:43.514625 6602 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zccpd_openshift-ovn-kubernetes(d49bee31-b7e9-4daa-986f-b6f58c663813)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.905358 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.918738 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bcqz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e67470a-b3fe-4176-b546-fdf28012fce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bcqz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:32:59Z is after 2025-08-24T17:21:41Z" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.944903 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.944968 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.944987 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.945011 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:32:59 crc kubenswrapper[4933]: I1201 09:32:59.945026 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:32:59Z","lastTransitionTime":"2025-12-01T09:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.048077 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.048123 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.048133 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.048151 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.048161 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:00Z","lastTransitionTime":"2025-12-01T09:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.150953 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.151028 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.151040 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.151064 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.151077 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:00Z","lastTransitionTime":"2025-12-01T09:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.253941 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.253981 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.253995 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.254016 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.254027 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:00Z","lastTransitionTime":"2025-12-01T09:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.356396 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.356427 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.356438 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.356456 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.356468 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:00Z","lastTransitionTime":"2025-12-01T09:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.459562 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.459617 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.459628 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.459645 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.459662 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:00Z","lastTransitionTime":"2025-12-01T09:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.563134 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.565526 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.565541 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.565584 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.565597 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:00Z","lastTransitionTime":"2025-12-01T09:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.668852 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.668922 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.668937 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.668961 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.668977 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:00Z","lastTransitionTime":"2025-12-01T09:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.772365 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.772419 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.772432 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.772451 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.772464 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:00Z","lastTransitionTime":"2025-12-01T09:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.875640 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.875705 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.875716 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.875732 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.875743 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:00Z","lastTransitionTime":"2025-12-01T09:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.978458 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.978515 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.978530 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.978553 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:00 crc kubenswrapper[4933]: I1201 09:33:00.978569 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:00Z","lastTransitionTime":"2025-12-01T09:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.081687 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.081730 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.081740 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.081757 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.081766 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:01Z","lastTransitionTime":"2025-12-01T09:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.184457 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.184524 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.184538 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.184562 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.184576 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:01Z","lastTransitionTime":"2025-12-01T09:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.287758 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.287805 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.287817 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.287834 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.287844 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:01Z","lastTransitionTime":"2025-12-01T09:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.389923 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.389963 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.389982 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.390005 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.390017 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:01Z","lastTransitionTime":"2025-12-01T09:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.493744 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.493837 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.493851 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.493874 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.493890 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:01Z","lastTransitionTime":"2025-12-01T09:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.596409 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.596474 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.596484 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.596504 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.596516 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:01Z","lastTransitionTime":"2025-12-01T09:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.667289 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.667376 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.667445 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:33:01 crc kubenswrapper[4933]: E1201 09:33:01.667488 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:33:01 crc kubenswrapper[4933]: E1201 09:33:01.667695 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:33:01 crc kubenswrapper[4933]: E1201 09:33:01.667735 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.668113 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:33:01 crc kubenswrapper[4933]: E1201 09:33:01.668222 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.698671 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.698745 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.698757 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.698774 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.698785 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:01Z","lastTransitionTime":"2025-12-01T09:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.802132 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.802195 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.802210 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.802240 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.802259 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:01Z","lastTransitionTime":"2025-12-01T09:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.905540 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.905921 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.906049 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.906140 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:01 crc kubenswrapper[4933]: I1201 09:33:01.906204 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:01Z","lastTransitionTime":"2025-12-01T09:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.009233 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.009600 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.009683 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.009796 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.009882 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:02Z","lastTransitionTime":"2025-12-01T09:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.112530 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.112574 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.112584 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.112599 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.112609 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:02Z","lastTransitionTime":"2025-12-01T09:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.214896 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.214944 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.214958 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.214976 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.214989 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:02Z","lastTransitionTime":"2025-12-01T09:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.318400 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.318453 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.318462 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.318479 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.318489 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:02Z","lastTransitionTime":"2025-12-01T09:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.421180 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.421240 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.421251 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.421275 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.421289 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:02Z","lastTransitionTime":"2025-12-01T09:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.462521 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.462579 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.462590 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.462610 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.462977 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:02Z","lastTransitionTime":"2025-12-01T09:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:02 crc kubenswrapper[4933]: E1201 09:33:02.480382 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:02Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.485982 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.486020 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.486031 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.486051 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.486064 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:02Z","lastTransitionTime":"2025-12-01T09:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[four further E1201 "Error updating node status, will retry" entries at 09:33:02.501872, 09:33:02.522448, 09:33:02.544237, and 09:33:02.570263 omitted: each carries the identical failed status patch (same conditions, image list, and nodeInfo as above), rejected by webhook "node.network-node-identity.openshift.io" with the same error "x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:02Z is after 2025-08-24T17:21:41Z", and each is followed by the same NodeHasSufficientMemory/NodeHasNoDiskPressure/NodeHasSufficientPID/NodeNotReady event recordings and "Node became not ready" condition]
Dec 01 09:33:02 crc kubenswrapper[4933]: E1201 09:33:02.570477 4933 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.572911 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.572968 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.572982 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.573003 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.573017 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:02Z","lastTransitionTime":"2025-12-01T09:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.676483 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.676531 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.676542 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.676559 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.676573 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:02Z","lastTransitionTime":"2025-12-01T09:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.779843 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.779899 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.779912 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.779935 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.779949 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:02Z","lastTransitionTime":"2025-12-01T09:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.883009 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.883054 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.883064 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.883083 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.883096 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:02Z","lastTransitionTime":"2025-12-01T09:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.986887 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.986953 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.986962 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.986985 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:02 crc kubenswrapper[4933]: I1201 09:33:02.987196 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:02Z","lastTransitionTime":"2025-12-01T09:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.090371 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.090431 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.090443 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.090463 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.090474 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:03Z","lastTransitionTime":"2025-12-01T09:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.193827 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.193884 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.193899 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.193922 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.193937 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:03Z","lastTransitionTime":"2025-12-01T09:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.297787 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.297880 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.297925 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.297960 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.297985 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:03Z","lastTransitionTime":"2025-12-01T09:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.400823 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.400863 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.400872 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.400888 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.400899 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:03Z","lastTransitionTime":"2025-12-01T09:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.506766 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.506907 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.506932 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.506961 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.506980 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:03Z","lastTransitionTime":"2025-12-01T09:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.610301 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.610366 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.610375 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.610399 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.610412 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:03Z","lastTransitionTime":"2025-12-01T09:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.666863 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.666942 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.666971 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.666908 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:33:03 crc kubenswrapper[4933]: E1201 09:33:03.667084 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:33:03 crc kubenswrapper[4933]: E1201 09:33:03.667281 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:33:03 crc kubenswrapper[4933]: E1201 09:33:03.667630 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:33:03 crc kubenswrapper[4933]: E1201 09:33:03.667776 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.713350 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.713395 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.713404 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.713423 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.713433 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:03Z","lastTransitionTime":"2025-12-01T09:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.816539 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.816598 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.816610 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.816630 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.816643 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:03Z","lastTransitionTime":"2025-12-01T09:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.919894 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.919953 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.919964 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.919986 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:03 crc kubenswrapper[4933]: I1201 09:33:03.920010 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:03Z","lastTransitionTime":"2025-12-01T09:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.023409 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.023480 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.023491 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.023516 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.023541 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:04Z","lastTransitionTime":"2025-12-01T09:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.126319 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.126362 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.126372 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.126389 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.126401 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:04Z","lastTransitionTime":"2025-12-01T09:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.229389 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.229449 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.229464 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.229485 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.229498 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:04Z","lastTransitionTime":"2025-12-01T09:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.332994 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.333040 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.333053 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.333074 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.333088 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:04Z","lastTransitionTime":"2025-12-01T09:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.436089 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.436141 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.436154 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.436173 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.436185 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:04Z","lastTransitionTime":"2025-12-01T09:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.539779 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.539869 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.539880 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.539902 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.539919 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:04Z","lastTransitionTime":"2025-12-01T09:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.643419 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.643480 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.643492 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.643510 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.643524 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:04Z","lastTransitionTime":"2025-12-01T09:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.746932 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.746992 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.747004 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.747027 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.747037 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:04Z","lastTransitionTime":"2025-12-01T09:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.850530 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.850593 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.850614 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.850640 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.850659 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:04Z","lastTransitionTime":"2025-12-01T09:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.952948 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.953020 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.953038 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.953061 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:04 crc kubenswrapper[4933]: I1201 09:33:04.953075 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:04Z","lastTransitionTime":"2025-12-01T09:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.056360 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.056426 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.056437 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.056465 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.056478 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:05Z","lastTransitionTime":"2025-12-01T09:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.159465 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.159554 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.159582 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.159615 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.159634 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:05Z","lastTransitionTime":"2025-12-01T09:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.262848 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.262906 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.262924 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.262946 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.262958 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:05Z","lastTransitionTime":"2025-12-01T09:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.366168 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.366235 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.366248 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.366273 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.366364 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:05Z","lastTransitionTime":"2025-12-01T09:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.469403 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.469447 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.469459 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.469478 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.469490 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:05Z","lastTransitionTime":"2025-12-01T09:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.572632 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.572682 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.572694 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.572715 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.572726 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:05Z","lastTransitionTime":"2025-12-01T09:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.667631 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.667720 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.667754 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.667671 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:33:05 crc kubenswrapper[4933]: E1201 09:33:05.667852 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:33:05 crc kubenswrapper[4933]: E1201 09:33:05.668008 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:33:05 crc kubenswrapper[4933]: E1201 09:33:05.668195 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:33:05 crc kubenswrapper[4933]: E1201 09:33:05.668240 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.674730 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.674777 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.674790 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.674815 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.674833 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:05Z","lastTransitionTime":"2025-12-01T09:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.778099 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.778146 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.778158 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.778182 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.778196 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:05Z","lastTransitionTime":"2025-12-01T09:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.882279 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.882359 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.882373 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.882393 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.882409 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:05Z","lastTransitionTime":"2025-12-01T09:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.985198 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.985253 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.985268 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.985289 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:05 crc kubenswrapper[4933]: I1201 09:33:05.985303 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:05Z","lastTransitionTime":"2025-12-01T09:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.088966 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.089016 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.089028 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.089051 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.089065 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:06Z","lastTransitionTime":"2025-12-01T09:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.192291 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.192365 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.192378 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.192397 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.192409 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:06Z","lastTransitionTime":"2025-12-01T09:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.295150 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.295194 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.295205 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.295224 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.295236 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:06Z","lastTransitionTime":"2025-12-01T09:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.398022 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.398075 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.398085 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.398112 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.398127 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:06Z","lastTransitionTime":"2025-12-01T09:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.500946 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.501014 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.501033 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.501068 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.501081 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:06Z","lastTransitionTime":"2025-12-01T09:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.604152 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.604210 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.604221 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.604245 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.604257 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:06Z","lastTransitionTime":"2025-12-01T09:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.707779 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.707843 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.707878 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.707904 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.707919 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:06Z","lastTransitionTime":"2025-12-01T09:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.810974 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.811043 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.811057 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.811079 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.811093 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:06Z","lastTransitionTime":"2025-12-01T09:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.913757 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.913831 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.913851 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.913883 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:06 crc kubenswrapper[4933]: I1201 09:33:06.913904 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:06Z","lastTransitionTime":"2025-12-01T09:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.017096 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.017159 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.017183 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.017205 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.017220 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:07Z","lastTransitionTime":"2025-12-01T09:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.120142 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.120195 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.120207 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.120227 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.120239 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:07Z","lastTransitionTime":"2025-12-01T09:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.222926 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.222986 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.223001 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.223035 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.223050 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:07Z","lastTransitionTime":"2025-12-01T09:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.325586 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.325650 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.325663 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.325703 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.325717 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:07Z","lastTransitionTime":"2025-12-01T09:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.428464 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.428515 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.428525 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.428548 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.428567 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:07Z","lastTransitionTime":"2025-12-01T09:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.531634 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.531678 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.531690 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.531709 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.531726 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:07Z","lastTransitionTime":"2025-12-01T09:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.635354 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.635399 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.635408 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.635425 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.635434 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:07Z","lastTransitionTime":"2025-12-01T09:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
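The block above repeats the same five-event heartbeat cycle roughly every 100 ms: the kubelet records NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID and NodeNotReady, then re-sets the node's Ready=False condition. The condition={...} payload in each setters.go:603 entry is plain JSON, so it can be pulled apart mechanically for triage. The following is a minimal Go sketch, not kubelet source: the struct is my own and mirrors only the fields visible in these lines, and the payload is copied verbatim from one of the entries above.

// parsecondition.go: minimal sketch for decoding the condition={...}
// payload seen in the "Node became not ready" log entries. Struct and
// field names here are illustrative, not kubelet types.
package main

import (
	"encoding/json"
	"fmt"
)

type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Payload copied verbatim from a setters.go:603 entry above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:06Z","lastTransitionTime":"2025-12-01T09:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	// Prints: Ready=False since 2025-12-01T09:33:06Z: KubeletNotReady
	fmt.Printf("%s=%s since %s: %s\n", c.Type, c.Status, c.LastTransitionTime, c.Reason)
}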
Has your network provider started?"} Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.666826 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.666867 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.667062 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:33:07 crc kubenswrapper[4933]: E1201 09:33:07.667057 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.667239 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:33:07 crc kubenswrapper[4933]: E1201 09:33:07.667281 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:33:07 crc kubenswrapper[4933]: E1201 09:33:07.667401 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:33:07 crc kubenswrapper[4933]: E1201 09:33:07.667235 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.738358 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.738431 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.738446 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.738464 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.738494 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:07Z","lastTransitionTime":"2025-12-01T09:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.842552 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.842640 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.842660 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.842692 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.842720 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:07Z","lastTransitionTime":"2025-12-01T09:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.945830 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.945903 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.945973 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.945998 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:07 crc kubenswrapper[4933]: I1201 09:33:07.946011 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:07Z","lastTransitionTime":"2025-12-01T09:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.050402 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.050460 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.050471 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.050497 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.050511 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:08Z","lastTransitionTime":"2025-12-01T09:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.153614 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.153666 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.153679 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.153700 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.153717 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:08Z","lastTransitionTime":"2025-12-01T09:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.257147 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.257212 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.257226 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.257252 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.257269 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:08Z","lastTransitionTime":"2025-12-01T09:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.361007 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.361059 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.361071 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.361090 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.361104 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:08Z","lastTransitionTime":"2025-12-01T09:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.464900 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.464961 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.464972 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.464994 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.465004 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:08Z","lastTransitionTime":"2025-12-01T09:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.568518 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.568564 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.568574 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.568591 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.568601 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:08Z","lastTransitionTime":"2025-12-01T09:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.667607 4933 scope.go:117] "RemoveContainer" containerID="3871eff8e2af46c5771ecd2db9ba030c26b8ef7cf8481ded1eaf32e97ed733a0" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.671171 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.671231 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.671246 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.671268 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.671283 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:08Z","lastTransitionTime":"2025-12-01T09:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.773850 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.773893 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.773905 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.773920 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.773932 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:08Z","lastTransitionTime":"2025-12-01T09:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.877820 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.877879 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.877890 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.877909 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.877921 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:08Z","lastTransitionTime":"2025-12-01T09:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.981154 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.981213 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.981228 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.981250 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:08 crc kubenswrapper[4933]: I1201 09:33:08.981268 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:08Z","lastTransitionTime":"2025-12-01T09:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.084459 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.084515 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.084527 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.084547 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.084558 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:09Z","lastTransitionTime":"2025-12-01T09:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.187572 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.187666 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.187679 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.187699 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.187720 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:09Z","lastTransitionTime":"2025-12-01T09:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.290711 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.290770 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.290781 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.290803 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.290816 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:09Z","lastTransitionTime":"2025-12-01T09:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.394359 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.394409 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.394420 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.394436 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.394446 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:09Z","lastTransitionTime":"2025-12-01T09:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.498608 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.498676 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.498692 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.498714 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.498728 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:09Z","lastTransitionTime":"2025-12-01T09:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.602129 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.602177 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.602189 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.602214 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.602227 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:09Z","lastTransitionTime":"2025-12-01T09:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.666642 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.667247 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.667386 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.667385 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:33:09 crc kubenswrapper[4933]: E1201 09:33:09.667541 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:33:09 crc kubenswrapper[4933]: E1201 09:33:09.667874 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:33:09 crc kubenswrapper[4933]: E1201 09:33:09.667995 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:33:09 crc kubenswrapper[4933]: E1201 09:33:09.668052 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
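Every "Error syncing pod, skipping" entry above shares the root cause that the Ready condition reports: there is no CNI configuration file in /etc/kubernetes/cni/net.d/, so no pod sandbox can get a network until the network provider (here, ovn-kubernetes) writes its config there. The sketch below illustrates the kind of directory check this implies; it is not the kubelet's actual readiness probe, and the accepted extensions (.conf, .conflist, .json) are an assumption based on common CNI loader convention.

// cnicheck.go: illustrative check for "no CNI configuration file in
// /etc/kubernetes/cni/net.d/" — does the conf dir contain any usable
// network config file?
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one file with a
// conventional CNI config extension.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err // directory missing or unreadable
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed extension set
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	if !ok {
		// The state this node is stuck in until ovn-kubernetes recovers.
		fmt.Println("no CNI configuration file found; network plugin not ready")
	}
}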
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.684982 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:09Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.686142 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.705931 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.705996 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.706012 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.706040 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.706057 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:09Z","lastTransitionTime":"2025-12-01T09:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.711099 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3871eff8e2af46c5771ecd2db9ba030c26b8ef7cf8481ded1eaf32e97ed733a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3871eff8e2af46c5771ecd2db9ba030c26b8ef7cf8481ded1eaf32e97ed733a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:44Z\\\",\\\"message\\\":\\\"formers/externalversions/factory.go:141\\\\nI1201 09:32:43.504953 6602 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1201 09:32:43.505265 6602 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 09:32:43.505783 6602 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 09:32:43.509050 6602 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 09:32:43.509138 6602 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 09:32:43.509214 6602 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 09:32:43.509344 6602 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 09:32:43.509424 6602 factory.go:656] Stopping watch factory\\\\nI1201 09:32:43.514272 6602 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1201 09:32:43.514299 6602 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1201 09:32:43.514433 6602 ovnkube.go:599] Stopped ovnkube\\\\nI1201 09:32:43.514507 6602 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 09:32:43.514625 6602 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zccpd_openshift-ovn-kubernetes(d49bee31-b7e9-4daa-986f-b6f58c663813)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:09Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.727042 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:09Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.740662 4933 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-bcqz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e67470a-b3fe-4176-b546-fdf28012fce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bcqz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:09Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.756330 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25d453a5-e8e5-4563-a6af-2a0190fbe7eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cb9e78d01fb4f20fa14d20f2dd4b044fcedbebda97e0437e562e4c8b5e9072a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9beebf42bdb1ff95c1b5a3faeb820455a7c0fcb764f0b1f3fd892575a95334b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001dd3d1bb28861105ed423a5460657b031a040e934d0c789a766ca3f9499ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:09Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.772973 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:09Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.786450 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:09Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.802468 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6349096c-1520-4206-a85c-e4b3d12e2a57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be0448561ffbd1804ea3b1d6aa5124a87bdc861f066ec878932aebe7ef8cec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce495e866931fe759415255c08d443d7d5a62e5a746855bffdc0ddb67d6d7cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:
20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8g5jg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:09Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.808775 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.808835 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.808849 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.808870 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.808884 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:09Z","lastTransitionTime":"2025-12-01T09:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.820076 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:09Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.835898 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T09:33:09Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.851165 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ac251024105496fb2cd821720a3ad6e717ef9c6da03401d62a0d58a96dce58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:57Z\\\",\\\"message\\\":\\\"2025-12-01T09:32:12+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_792cc42c-a3cc-430c-9e1b-e07d3cee31b1\\\\n2025-12-01T09:32:12+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_792cc42c-a3cc-430c-9e1b-e07d3cee31b1 to /host/opt/cni/bin/\\\\n2025-12-01T09:32:12Z [verbose] multus-daemon started\\\\n2025-12-01T09:32:12Z [verbose] Readiness Indicator file check\\\\n2025-12-01T09:32:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:09Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.869678 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://394798e74d5e23df64b5092d4f6a60763d9c14b7348b0b0ee607066cd3db0b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:09Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.888810 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:09Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.904054 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:09Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.912876 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.912938 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.912952 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.912971 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.912985 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:09Z","lastTransitionTime":"2025-12-01T09:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.922911 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:09Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.941662 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:09Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:09 crc kubenswrapper[4933]: I1201 09:33:09.960702 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:09Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.016140 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.016199 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.016210 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.016230 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.016240 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:10Z","lastTransitionTime":"2025-12-01T09:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.119375 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.119424 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.119435 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.119455 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.119469 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:10Z","lastTransitionTime":"2025-12-01T09:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.222590 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.222644 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.222655 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.222676 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.222689 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:10Z","lastTransitionTime":"2025-12-01T09:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.324806 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.324885 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.324900 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.324924 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.324938 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:10Z","lastTransitionTime":"2025-12-01T09:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.327491 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zccpd_d49bee31-b7e9-4daa-986f-b6f58c663813/ovnkube-controller/2.log" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.330427 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerStarted","Data":"74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d"} Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.347539 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:10Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.383976 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3871eff8e2af46c5771ecd2db9ba030c26b8ef7cf8481ded1eaf32e97ed733a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:44Z\\\",\\\"message\\\":\\\"formers/externalversions/factory.go:141\\\\nI1201 09:32:43.504953 6602 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1201 09:32:43.505265 6602 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 09:32:43.505783 6602 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 09:32:43.509050 6602 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 09:32:43.509138 6602 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 09:32:43.509214 6602 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 09:32:43.509344 6602 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 09:32:43.509424 6602 factory.go:656] Stopping watch factory\\\\nI1201 09:32:43.514272 6602 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1201 09:32:43.514299 6602 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1201 09:32:43.514433 6602 ovnkube.go:599] Stopped ovnkube\\\\nI1201 09:32:43.514507 6602 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 09:32:43.514625 6602 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:33:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:10Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.407272 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:10Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.428802 4933 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.428862 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.428878 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.428901 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.428919 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:10Z","lastTransitionTime":"2025-12-01T09:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.434816 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bcqz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e67470a-b3fe-4176-b546-fdf28012fce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bcqz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:10Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.457481 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25d453a5-e8e5-4563-a6af-2a0190fbe7eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cb9e78d01fb4f20fa14d20f2dd4b044fcedbebda97e0437e562e4c8b5e9072a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9beebf42bdb1ff95c1b5a3faeb820455a7c0fcb764f0b1f3fd892575a95334b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001dd3d1bb28861105ed423a5460657b031a040e934d0c789a766ca3f9499ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:10Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.475635 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:10Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.490747 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:10Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.512754 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccf3e1bb-4324-427c-a121-8d03fbbbbf2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2735f4a06b7a5b90a9b73750be04fb2598144d207bc7fcff5487142b5ce7845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f985c5d3848b8e8d2b0ad0995a2e5e65ebff87952226a2c74e07f62dd62f41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9539eb1cbec2f844ae9cccd4ac924105f6a11db5e1e03436eb369f3683e3f5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3815386b2976c2ce2dcae87a7aae2ddcfa0a53205ef1d81168c015a58b2385c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee912d789a5b6c5d2c9c7d8574b1975096969f054f46154f669ded20b6f19bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb3dd4183e3b84376101c7a0efbac3df96d9693934a5778bca7ff08e7554b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb3dd4183e3b84376101c7a0efbac3df96d9693934a5778bca7ff08e7554b42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e070bfc462a9f38c7a0ace6b75c51d491b514615d85ca57ca9a5485a653c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e070bfc462a9f38c7a0ace6b75c51d491b514615d85ca57ca9a5485a653c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c94a4d2d4128f74a0a0ecb00b4af1ed2835760620593fee78ca33f43a58d8623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c94a4d2d4128f74a0a0ecb00b4af1ed2835760620593fee78ca33f43a58d8623\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:10Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.532522 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:10Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.532585 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.532647 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.532661 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.532689 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.532704 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:10Z","lastTransitionTime":"2025-12-01T09:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.546359 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:10Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.563053 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ac251024105496fb2cd821720a3ad6e717ef9c6da03401d62a0d58a96dce58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:57Z\\\",\\\"message\\\":\\\"2025-12-01T09:32:12+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_792cc42c-a3cc-430c-9e1b-e07d3cee31b1\\\\n2025-12-01T09:32:12+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_792cc42c-a3cc-430c-9e1b-e07d3cee31b1 to /host/opt/cni/bin/\\\\n2025-12-01T09:32:12Z [verbose] multus-daemon started\\\\n2025-12-01T09:32:12Z [verbose] Readiness Indicator file check\\\\n2025-12-01T09:32:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:10Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.583489 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://394798e74d5e23df64b5092d4f6a60763d9c14b7348b0b0ee607066cd3db0b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:10Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.599599 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6349096c-1520-4206-a85c-e4b3d12e2a57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be0448561ffbd1804ea3b1d6aa5124a87bdc861f066ec878932aebe7ef8cec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce495e866931fe759415255c08d443d7d5a62e5a746855bffdc0ddb67d6d7cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8g5jg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:10Z is after 2025-08-24T17:21:41Z" Dec 01 
09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.618062 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:10Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.632385 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:10Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.635794 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.635831 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.635842 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.635859 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.635871 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:10Z","lastTransitionTime":"2025-12-01T09:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.647055 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:10Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.662779 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:10Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.676726 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:10Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.739570 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.739617 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.739629 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.739658 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.739670 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:10Z","lastTransitionTime":"2025-12-01T09:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.843440 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.843503 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.843520 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.843545 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.843564 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:10Z","lastTransitionTime":"2025-12-01T09:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.947622 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.947680 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.947693 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.947715 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:10 crc kubenswrapper[4933]: I1201 09:33:10.947732 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:10Z","lastTransitionTime":"2025-12-01T09:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.051015 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.051067 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.051077 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.051095 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.051107 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:11Z","lastTransitionTime":"2025-12-01T09:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.153991 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.154576 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.154818 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.155029 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.155214 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:11Z","lastTransitionTime":"2025-12-01T09:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.258004 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.258062 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.258075 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.258097 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.258115 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:11Z","lastTransitionTime":"2025-12-01T09:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.337347 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zccpd_d49bee31-b7e9-4daa-986f-b6f58c663813/ovnkube-controller/3.log" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.337933 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zccpd_d49bee31-b7e9-4daa-986f-b6f58c663813/ovnkube-controller/2.log" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.342589 4933 generic.go:334] "Generic (PLEG): container finished" podID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerID="74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d" exitCode=1 Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.342651 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerDied","Data":"74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d"} Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.342718 4933 scope.go:117] "RemoveContainer" containerID="3871eff8e2af46c5771ecd2db9ba030c26b8ef7cf8481ded1eaf32e97ed733a0" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.343702 4933 scope.go:117] "RemoveContainer" containerID="74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d" Dec 01 09:33:11 crc kubenswrapper[4933]: E1201 09:33:11.343928 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zccpd_openshift-ovn-kubernetes(d49bee31-b7e9-4daa-986f-b6f58c663813)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.361291 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.361361 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.361373 4933 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.361397 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.361412 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:11Z","lastTransitionTime":"2025-12-01T09:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.362490 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25d453a5-e8e5-4563-a6af-2a0190fbe7eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cb9e78d01fb4f20fa14d20f2dd4b044fcedbebda97e0437e562e4c8b5e9072a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9beebf42bdb1ff95c1b5a3faeb820455a7c0fcb764f0b1f3fd892575a95334b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001dd3d1bb28861105ed423a5460657b031a040e934d0c789a766ca3f9499ba1\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.379094 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.392018 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.406425 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.417234 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.431277 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ac251024105496fb2cd821720a3ad6e717ef9c6da03401d62a0d58a96dce58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:57Z\\\",\\\"message\\\":\\\"2025-12-01T09:32:12+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_792cc42c-a3cc-430c-9e1b-e07d3cee31b1\\\\n2025-12-01T09:32:12+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_792cc42c-a3cc-430c-9e1b-e07d3cee31b1 to /host/opt/cni/bin/\\\\n2025-12-01T09:32:12Z [verbose] multus-daemon started\\\\n2025-12-01T09:32:12Z [verbose] Readiness Indicator file check\\\\n2025-12-01T09:32:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.449160 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://394798e74d5e23df64b5092d4f6a60763d9c14b7348b0b0ee607066cd3db0b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.463763 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6349096c-1520-4206-a85c-e4b3d12e2a57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be0448561ffbd1804ea3b1d6aa5124a87bdc861f066ec878932aebe7ef8cec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce495e866931fe759415255c08d443d7d5a62e5a746855bffdc0ddb67d6d7cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8g5jg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:11Z is after 2025-08-24T17:21:41Z" Dec 01 
09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.463947 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.463980 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.463989 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.464007 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.464018 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:11Z","lastTransitionTime":"2025-12-01T09:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.487107 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccf3e1bb-4324-427c-a121-8d03fbbbbf2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2735f4a06b7a5b90a9b73750be04fb2598144d207bc7fcff5487142b5ce7845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f985c5d3848b8e8d2b0ad0995a2e5e65ebff87952226a2c74e07f62dd62f41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9539eb1cbec2f844ae9cccd4ac924105f6a11db5e1e03436eb369f3683e3f5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3815386b2976c2ce2dcae87a7aae2ddcfa0a53205ef1d81168c015a58b2385c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee912d789a5b6c5d2c9c7d8574b1975096969f054f46154f669ded20b6f19bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb3dd4183e3b84376101c7a0efbac3df96d9693934a5778bca7ff08e7554b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb3dd4183e3b84376101c7a0efbac3df96d9693934a5778bca7ff08e7554b42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e070bfc462a9f38c7a0ace6b75c51d491b514615d85ca57ca9a5485a653c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e070bfc462a9f38c7a0ace6b75c51d491b514615d85ca57ca9a5485a653c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c94a4d2d4128f74a0a0ecb00b4af1ed2835760620593fee78ca33f43a58d8623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c94a4d2d4128f74a0a0ecb00b4af1ed2835760620593fee78ca33f43a58d8623\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.502939 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.520025 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.538285 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.553605 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.566690 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.566757 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.566773 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.566800 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.566815 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:11Z","lastTransitionTime":"2025-12-01T09:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.570856 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.593394 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3871eff8e2af46c5771ecd2db9ba030c26b8ef7cf8481ded1eaf32e97ed733a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:44Z\\\",\\\"message\\\":\\\"formers/externalversions/factory.go:141\\\\nI1201 09:32:43.504953 6602 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1201 09:32:43.505265 6602 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 09:32:43.505783 6602 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 09:32:43.509050 6602 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 09:32:43.509138 6602 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 09:32:43.509214 6602 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 09:32:43.509344 6602 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 09:32:43.509424 6602 factory.go:656] Stopping watch factory\\\\nI1201 09:32:43.514272 6602 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1201 09:32:43.514299 6602 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1201 09:32:43.514433 6602 ovnkube.go:599] Stopped ovnkube\\\\nI1201 09:32:43.514507 6602 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 09:32:43.514625 6602 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:33:10Z\\\",\\\"message\\\":\\\"1201 09:33:10.580925 6959 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1201 09:33:10.580934 6959 handler.go:208] Removed *v1.Pod event handler 
6\\\\nI1201 09:33:10.580953 6959 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1201 09:33:10.580963 6959 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 09:33:10.580958 6959 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 09:33:10.580981 6959 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 09:33:10.580992 6959 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 09:33:10.581004 6959 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 09:33:10.581026 6959 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 09:33:10.581034 6959 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 09:33:10.581064 6959 factory.go:656] Stopping watch factory\\\\nI1201 09:33:10.581080 6959 ovnkube.go:599] Stopped ovnkube\\\\nI1201 09:33:10.581080 6959 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 09:33:10.581078 6959 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 09:33:10.581094 6959 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 09:33:10.581098 6959 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:33:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.613021 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.628769 4933 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-bcqz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e67470a-b3fe-4176-b546-fdf28012fce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bcqz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.644577 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:11Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.667224 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.667228 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:33:11 crc kubenswrapper[4933]: E1201 09:33:11.667389 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.667191 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:33:11 crc kubenswrapper[4933]: E1201 09:33:11.667637 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.667786 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:33:11 crc kubenswrapper[4933]: E1201 09:33:11.667886 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:33:11 crc kubenswrapper[4933]: E1201 09:33:11.667916 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.670281 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.670365 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.670378 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.670399 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.670413 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:11Z","lastTransitionTime":"2025-12-01T09:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.773520 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.773575 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.773586 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.773603 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.773615 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:11Z","lastTransitionTime":"2025-12-01T09:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.876467 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.876518 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.876529 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.876548 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.876562 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:11Z","lastTransitionTime":"2025-12-01T09:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.979818 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.979866 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.979875 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.979892 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:11 crc kubenswrapper[4933]: I1201 09:33:11.979904 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:11Z","lastTransitionTime":"2025-12-01T09:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.083402 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.083863 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.083960 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.084069 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.084172 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:12Z","lastTransitionTime":"2025-12-01T09:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.187539 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.187592 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.187602 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.187628 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.187639 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:12Z","lastTransitionTime":"2025-12-01T09:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.290671 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.290725 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.290734 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.290756 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.290772 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:12Z","lastTransitionTime":"2025-12-01T09:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.348759 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zccpd_d49bee31-b7e9-4daa-986f-b6f58c663813/ovnkube-controller/3.log" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.394747 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.394797 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.394811 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.394834 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.394849 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:12Z","lastTransitionTime":"2025-12-01T09:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.498661 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.498722 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.498737 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.498758 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.498772 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:12Z","lastTransitionTime":"2025-12-01T09:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.602332 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.602399 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.602414 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.602439 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.602455 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:12Z","lastTransitionTime":"2025-12-01T09:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.659470 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.659544 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.659558 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.659582 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.659597 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:12Z","lastTransitionTime":"2025-12-01T09:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:12 crc kubenswrapper[4933]: E1201 09:33:12.674422 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.679692 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.679735 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.679746 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.679765 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.679778 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:12Z","lastTransitionTime":"2025-12-01T09:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:12 crc kubenswrapper[4933]: E1201 09:33:12.692426 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.696579 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.696632 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.696645 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.696662 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.696674 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:12Z","lastTransitionTime":"2025-12-01T09:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:12 crc kubenswrapper[4933]: E1201 09:33:12.711761 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.717253 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.717324 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.717338 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.717359 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.717373 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:12Z","lastTransitionTime":"2025-12-01T09:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:12 crc kubenswrapper[4933]: E1201 09:33:12.730778 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.734992 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.735047 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.735063 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.735084 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.735098 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:12Z","lastTransitionTime":"2025-12-01T09:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:12 crc kubenswrapper[4933]: E1201 09:33:12.748923 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:12Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:12 crc kubenswrapper[4933]: E1201 09:33:12.749137 4933 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.751769 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
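The retries above all fail identically until the kubelet gives up ("update node status exceeds retry count"): the node.network-node-identity.openshift.io webhook serves a certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2025-12-01. A minimal sketch, written for this log rather than taken from any OpenShift tooling, that dials the endpoint from the Post URL in the error and prints the serving certificate's validity window:

    // checkcert.go: dial the webhook endpoint reported in the kubelet
    // errors and print the serving certificate's validity window.
    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        // Address taken from the log line: Post "https://127.0.0.1:9743/node?timeout=10s".
        // InsecureSkipVerify is deliberate: we want to inspect the expired
        // certificate, not validate it.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatalf("dial webhook endpoint: %v", err)
        }
        defer conn.Close()

        for _, cert := range conn.ConnectionState().PeerCertificates {
            fmt.Printf("subject=%s notBefore=%s notAfter=%s expired=%v\n",
                cert.Subject, cert.NotBefore.Format(time.RFC3339),
                cert.NotAfter.Format(time.RFC3339), time.Now().After(cert.NotAfter))
        }
    }

If the dial itself fails, the webhook server is not listening at all, which is a different failure from the expired certificate the kubelet reports here.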
event="NodeHasSufficientMemory" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.751824 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.751837 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.751860 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.751879 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:12Z","lastTransitionTime":"2025-12-01T09:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.854967 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.855076 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.855090 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.855111 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.855124 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:12Z","lastTransitionTime":"2025-12-01T09:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.958775 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.958835 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.958846 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.958877 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:12 crc kubenswrapper[4933]: I1201 09:33:12.958890 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:12Z","lastTransitionTime":"2025-12-01T09:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.061564 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.061622 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.061636 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.061815 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.061838 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:13Z","lastTransitionTime":"2025-12-01T09:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.166008 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.166078 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.166089 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.166110 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.166124 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:13Z","lastTransitionTime":"2025-12-01T09:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.268770 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.268836 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.268861 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.268882 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.269299 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:13Z","lastTransitionTime":"2025-12-01T09:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.278226 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.278413 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.278442 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 09:33:13 crc kubenswrapper[4933]: E1201 09:33:13.278509 4933 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 01 09:33:13 crc kubenswrapper[4933]: E1201 09:33:13.278507 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:17.278456631 +0000 UTC m=+147.920180246 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 09:33:13 crc kubenswrapper[4933]: E1201 09:33:13.278549 4933 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 01 09:33:13 crc kubenswrapper[4933]: E1201 09:33:13.278577 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 09:34:17.278557434 +0000 UTC m=+147.920281049 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 01 09:33:13 crc kubenswrapper[4933]: E1201 09:33:13.278596 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 09:34:17.278585204 +0000 UTC m=+147.920308819 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.278619 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.278665 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 01 09:33:13 crc kubenswrapper[4933]: E1201 09:33:13.278755 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 01 09:33:13 crc kubenswrapper[4933]: E1201 09:33:13.278770 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 01 09:33:13 crc kubenswrapper[4933]: E1201 09:33:13.278782 4933 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 01 09:33:13 crc kubenswrapper[4933]: E1201 09:33:13.278810 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 09:34:17.27880272 +0000 UTC m=+147.920526335 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 01 09:33:13 crc kubenswrapper[4933]: E1201 09:33:13.278904 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 01 09:33:13 crc kubenswrapper[4933]: E1201 09:33:13.278926 4933 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 01 09:33:13 crc kubenswrapper[4933]: E1201 09:33:13.278945 4933 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 01 09:33:13 crc kubenswrapper[4933]: E1201 09:33:13.279002 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 09:34:17.278984605 +0000 UTC m=+147.920708270 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.373353 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.373421 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.373435 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.373456 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.373470 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:13Z","lastTransitionTime":"2025-12-01T09:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
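Note the retry arithmetic in the nestedpendingoperations.go:348 entries above: every failed mount/unmount is pushed out by durationBeforeRetry 1m4s (the m=+147.92 figure is just the kubelet's monotonic uptime). 64s is what a 500ms base doubled across seven prior failures produces, which is consistent with these volume operations having failed repeatedly since startup under an exponential backoff. The base and cap below are assumptions chosen to reproduce that value, not the kubelet's confirmed constants:

    package main

    import (
        "fmt"
        "time"
    )

    // backoff returns the wait before retry number n (0-based), doubling from
    // base and saturating at limit. 500ms base and ~2m limit are assumptions
    // picked so that failure 7 yields the 1m4s seen in the log.
    func backoff(n int, base, limit time.Duration) time.Duration {
        d := base
        for i := 0; i < n; i++ {
            d *= 2
            if d >= limit {
                return limit
            }
        }
        return d
    }

    func main() {
        for n := 0; n <= 8; n++ {
            fmt.Printf("failure %d -> wait %s\n",
                n, backoff(n, 500*time.Millisecond, 2*time.Minute+2*time.Second))
        }
        // failure 7 -> wait 1m4s, matching durationBeforeRetry above
    }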
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.476981 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.477062 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.477079 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.477109 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.477126 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:13Z","lastTransitionTime":"2025-12-01T09:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.581219 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.581279 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.581295 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.581334 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.581347 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:13Z","lastTransitionTime":"2025-12-01T09:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.666829 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.667107 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5"
Dec 01 09:33:13 crc kubenswrapper[4933]: E1201 09:33:13.667231 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.667269 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 01 09:33:13 crc kubenswrapper[4933]: E1201 09:33:13.667323 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5"
Dec 01 09:33:13 crc kubenswrapper[4933]: E1201 09:33:13.667477 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.667519 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 09:33:13 crc kubenswrapper[4933]: E1201 09:33:13.668229 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.684873 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.684905 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.684918 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.684934 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.684948 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:13Z","lastTransitionTime":"2025-12-01T09:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
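Every NotReady heartbeat and every skipped pod sync in this stretch traces back to the same cause: nothing loadable in /etc/kubernetes/cni/net.d/. Conceptually the runtime-side check is a directory probe like the sketch below; the real logic lives in the CRI runtime's CNI handling (ocicni in CRI-O's case), and the accepted file extensions here are an assumption for illustration:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // hasCNIConfig reports whether dir contains at least one file with an
    // extension CNI config loaders typically accept (.conf, .conflist, .json).
    func hasCNIConfig(dir string) (bool, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false, err
        }
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
        if err != nil || !ok {
            fmt.Println("no CNI configuration file found; network plugin not ready")
            return
        }
        fmt.Println("CNI configuration present")
    }

On this node the probe keeps coming up empty, so the runtime reports NetworkReady=false and the kubelet keeps re-publishing the NotReady condition until the network operator writes a config.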
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.788322 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.788376 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.788390 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.788413 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.788430 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:13Z","lastTransitionTime":"2025-12-01T09:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.891179 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.891234 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.891247 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.891270 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.891285 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:13Z","lastTransitionTime":"2025-12-01T09:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.994207 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.994264 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.994279 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.994329 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:13 crc kubenswrapper[4933]: I1201 09:33:13.994347 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:13Z","lastTransitionTime":"2025-12-01T09:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.097709 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.097766 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.097781 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.097805 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.097821 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:14Z","lastTransitionTime":"2025-12-01T09:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.201379 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.201429 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.201440 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.201457 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.201470 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:14Z","lastTransitionTime":"2025-12-01T09:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.304847 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.304908 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.304923 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.304947 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.304963 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:14Z","lastTransitionTime":"2025-12-01T09:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.407635 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.407717 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.407732 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.407755 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.407769 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:14Z","lastTransitionTime":"2025-12-01T09:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.510284 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.510353 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.510365 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.510388 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.510403 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:14Z","lastTransitionTime":"2025-12-01T09:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.613416 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.613462 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.613471 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.613483 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.613491 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:14Z","lastTransitionTime":"2025-12-01T09:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.716893 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.716964 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.716979 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.717004 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.717024 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:14Z","lastTransitionTime":"2025-12-01T09:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.820252 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.820325 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.820334 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.820353 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.820364 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:14Z","lastTransitionTime":"2025-12-01T09:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.923703 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.923759 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.923779 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.923802 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:14 crc kubenswrapper[4933]: I1201 09:33:14.923814 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:14Z","lastTransitionTime":"2025-12-01T09:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.027399 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.027457 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.027469 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.027501 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.027518 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:15Z","lastTransitionTime":"2025-12-01T09:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.130720 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.130770 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.130782 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.130803 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.130817 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:15Z","lastTransitionTime":"2025-12-01T09:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.234074 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.234146 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.234158 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.234176 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.234190 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:15Z","lastTransitionTime":"2025-12-01T09:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.337007 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.337083 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.337101 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.337124 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.337138 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:15Z","lastTransitionTime":"2025-12-01T09:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.440440 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.440488 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.440499 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.440521 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.440540 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:15Z","lastTransitionTime":"2025-12-01T09:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.545051 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.545589 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.545670 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.545746 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.545859 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:15Z","lastTransitionTime":"2025-12-01T09:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.649387 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.649436 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.649452 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.649473 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.649486 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:15Z","lastTransitionTime":"2025-12-01T09:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.667185 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.667245 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.667205 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.667185 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 01 09:33:15 crc kubenswrapper[4933]: E1201 09:33:15.667404 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 01 09:33:15 crc kubenswrapper[4933]: E1201 09:33:15.667482 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5"
Dec 01 09:33:15 crc kubenswrapper[4933]: E1201 09:33:15.667709 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
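Note which pods keep hitting pod_workers.go:1301: only the four that need a new sandbox on the cluster network. Pods running in the host network namespace can still be started while NetworkReady=false, which is why nothing else appears in these bursts. A rough sketch of that gate, with simplified local types standing in for the kubelet's actual pod sync structures (an assumption for illustration, not the kubelet's code):

    package main

    import "fmt"

    // pod is a simplified stand-in; only the fields relevant to the gate remain.
    type pod struct {
        name        string
        hostNetwork bool
        needSandbox bool
    }

    // shouldSkipSync mirrors the behavior visible in the log: a pod that needs
    // a new sandbox cannot be synced while the runtime network is not ready,
    // unless it runs in the host network namespace.
    func shouldSkipSync(p pod, networkReady bool) bool {
        return p.needSandbox && !networkReady && !p.hostNetwork
    }

    func main() {
        pods := []pod{
            {"openshift-network-diagnostics/network-check-target-xd92c", false, true},
            {"openshift-multus/network-metrics-daemon-bcqz5", false, true},
        }
        for _, p := range pods {
            if shouldSkipSync(p, false) {
                fmt.Printf("Error syncing pod, skipping: %s (network is not ready)\n", p.name)
            }
        }
    }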
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:33:15 crc kubenswrapper[4933]: E1201 09:33:15.667797 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.753541 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.753610 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.753624 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.753643 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.753656 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:15Z","lastTransitionTime":"2025-12-01T09:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.857828 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.857897 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.857908 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.857930 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.857943 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:15Z","lastTransitionTime":"2025-12-01T09:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.960534 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.960581 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.960596 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.960619 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:15 crc kubenswrapper[4933]: I1201 09:33:15.960636 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:15Z","lastTransitionTime":"2025-12-01T09:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.063962 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.064056 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.064074 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.064101 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.064120 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:16Z","lastTransitionTime":"2025-12-01T09:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.167516 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.167586 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.167600 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.167624 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.167639 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:16Z","lastTransitionTime":"2025-12-01T09:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.269696 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.269786 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.269798 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.269815 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.269826 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:16Z","lastTransitionTime":"2025-12-01T09:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.372093 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.372835 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.372888 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.372976 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.373007 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:16Z","lastTransitionTime":"2025-12-01T09:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.476751 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.476823 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.476835 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.476861 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.476875 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:16Z","lastTransitionTime":"2025-12-01T09:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.580206 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.580281 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.580297 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.580343 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.580359 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:16Z","lastTransitionTime":"2025-12-01T09:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.683475 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.683901 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.683916 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.683936 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.683949 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:16Z","lastTransitionTime":"2025-12-01T09:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.786661 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.786715 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.786727 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.786746 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.786758 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:16Z","lastTransitionTime":"2025-12-01T09:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.890610 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.890666 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.890678 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.890701 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.890717 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:16Z","lastTransitionTime":"2025-12-01T09:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.994758 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.994815 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.994826 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.994851 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:16 crc kubenswrapper[4933]: I1201 09:33:16.994864 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:16Z","lastTransitionTime":"2025-12-01T09:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.097620 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.097688 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.097701 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.097725 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.097740 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:17Z","lastTransitionTime":"2025-12-01T09:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.201126 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.201193 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.201206 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.201231 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.201249 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:17Z","lastTransitionTime":"2025-12-01T09:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.304679 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.304741 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.304751 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.304771 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.304785 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:17Z","lastTransitionTime":"2025-12-01T09:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.408396 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.408483 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.408494 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.408514 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.408525 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:17Z","lastTransitionTime":"2025-12-01T09:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.511583 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.511633 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.511642 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.511660 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.511673 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:17Z","lastTransitionTime":"2025-12-01T09:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.614061 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.614112 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.614124 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.614142 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.614153 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:17Z","lastTransitionTime":"2025-12-01T09:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.666732 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.666846 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.666775 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:33:17 crc kubenswrapper[4933]: E1201 09:33:17.666951 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.666901 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:33:17 crc kubenswrapper[4933]: E1201 09:33:17.667354 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:33:17 crc kubenswrapper[4933]: E1201 09:33:17.667300 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:33:17 crc kubenswrapper[4933]: E1201 09:33:17.667437 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.717023 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.717135 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.717147 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.717170 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.717193 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:17Z","lastTransitionTime":"2025-12-01T09:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.820157 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.820217 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.820228 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.820250 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.820264 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:17Z","lastTransitionTime":"2025-12-01T09:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.924386 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.924459 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.924471 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.924490 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:17 crc kubenswrapper[4933]: I1201 09:33:17.924502 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:17Z","lastTransitionTime":"2025-12-01T09:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.028073 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.028135 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.028149 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.028165 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.028179 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:18Z","lastTransitionTime":"2025-12-01T09:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.131562 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.131637 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.131654 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.131678 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.131691 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:18Z","lastTransitionTime":"2025-12-01T09:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.234364 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.234416 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.234428 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.234450 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.234464 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:18Z","lastTransitionTime":"2025-12-01T09:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.338145 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.338208 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.338220 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.338242 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.338254 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:18Z","lastTransitionTime":"2025-12-01T09:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.441519 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.441599 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.441613 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.441637 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.441650 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:18Z","lastTransitionTime":"2025-12-01T09:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.544338 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.544408 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.544422 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.544450 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.544467 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:18Z","lastTransitionTime":"2025-12-01T09:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.647247 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.647316 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.647336 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.647355 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.647365 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:18Z","lastTransitionTime":"2025-12-01T09:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.688768 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.750972 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.751052 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.751067 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.751093 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.751111 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:18Z","lastTransitionTime":"2025-12-01T09:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.853508 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.853757 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.853771 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.853793 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.853808 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:18Z","lastTransitionTime":"2025-12-01T09:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.957293 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.957374 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.957389 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.957417 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:18 crc kubenswrapper[4933]: I1201 09:33:18.957432 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:18Z","lastTransitionTime":"2025-12-01T09:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.059910 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.059954 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.059964 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.059981 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.059994 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:19Z","lastTransitionTime":"2025-12-01T09:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.164600 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.164659 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.164676 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.164711 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.164740 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:19Z","lastTransitionTime":"2025-12-01T09:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.267886 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.267980 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.267994 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.268018 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.268033 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:19Z","lastTransitionTime":"2025-12-01T09:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.370698 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.370757 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.370767 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.370785 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.370797 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:19Z","lastTransitionTime":"2025-12-01T09:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.473223 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.473270 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.473280 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.473295 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.473321 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:19Z","lastTransitionTime":"2025-12-01T09:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.576190 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.576252 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.576263 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.576285 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.576324 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:19Z","lastTransitionTime":"2025-12-01T09:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.667040 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:33:19 crc kubenswrapper[4933]: E1201 09:33:19.667228 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.667358 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.667396 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.667529 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:33:19 crc kubenswrapper[4933]: E1201 09:33:19.667598 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:33:19 crc kubenswrapper[4933]: E1201 09:33:19.667704 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:33:19 crc kubenswrapper[4933]: E1201 09:33:19.667763 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.679068 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.679120 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.679130 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.679147 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.679160 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:19Z","lastTransitionTime":"2025-12-01T09:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.687640 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.704569 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.720762 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.736097 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.750822 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.766196 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.781567 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.781617 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.781632 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.781659 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.781680 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:19Z","lastTransitionTime":"2025-12-01T09:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.788523 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f07a55448967411696b183c294e6f59af59d73
c5b214b06830dfc34658fc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3871eff8e2af46c5771ecd2db9ba030c26b8ef7cf8481ded1eaf32e97ed733a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:44Z\\\",\\\"message\\\":\\\"formers/externalversions/factory.go:141\\\\nI1201 09:32:43.504953 6602 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1201 09:32:43.505265 6602 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 09:32:43.505783 6602 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 09:32:43.509050 6602 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 09:32:43.509138 6602 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 09:32:43.509214 6602 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 09:32:43.509344 6602 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 09:32:43.509424 6602 factory.go:656] Stopping watch factory\\\\nI1201 09:32:43.514272 6602 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1201 09:32:43.514299 6602 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1201 09:32:43.514433 6602 ovnkube.go:599] Stopped ovnkube\\\\nI1201 09:32:43.514507 6602 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 09:32:43.514625 6602 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:33:10Z\\\",\\\"message\\\":\\\"1201 09:33:10.580925 6959 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1201 09:33:10.580934 6959 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1201 09:33:10.580953 6959 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1201 09:33:10.580963 6959 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 09:33:10.580958 6959 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 09:33:10.580981 6959 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 09:33:10.580992 6959 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 09:33:10.581004 6959 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 09:33:10.581026 6959 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 09:33:10.581034 6959 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 09:33:10.581064 6959 factory.go:656] Stopping watch factory\\\\nI1201 09:33:10.581080 6959 ovnkube.go:599] Stopped ovnkube\\\\nI1201 09:33:10.581080 6959 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 09:33:10.581078 6959 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 
09:33:10.581094 6959 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 09:33:10.581098 6959 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:33:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\
"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.801791 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.813380 4933 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-bcqz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e67470a-b3fe-4176-b546-fdf28012fce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bcqz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.826838 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7afe09c3-fd9a-47e5-aaf2-6d017a2f0f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4034d688e26719a662808aa5c0756a8cff2b474424f6aff2987cbbf181f9e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b86f15566c5afde426670165750e324859e27846f38fa96071c4e81c1851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b86f15566c5afde426670165750e324859e27846f38fa96071c4e81c1851af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.841856 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25d453a5-e8e5-4563-a6af-2a0190fbe7eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cb9e78d01fb4f20fa14d20f2dd4b044fcedbebda97e0437e562e4c8b5e9072a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9beebf42bdb1ff95c1b5a3faeb820455a7c0fcb764f0b1f3fd892575a95334b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001dd3d1bb28861105ed423a5460657b031a040e934d0c789a766ca3f9499ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.859430 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.873510 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.884633 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.884674 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.884686 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.884704 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.884715 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:19Z","lastTransitionTime":"2025-12-01T09:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.894361 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccf3e1bb-4324-427c-a121-8d03fbbbbf2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2735f4a06b7a5b90a9b73750be04fb2598144d207bc7fcff5487142b5ce7845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f985c5d3848b8e8d2b0ad0995a2e5e65ebff87952226a2c74e07f62dd62f41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9539eb1cbec2f844ae9cccd4ac924105f6a11db5e1e03436eb369f3683e3f5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3815386b2976c2ce2dcae87a7aae2ddcfa0a53205ef1d81168c015a58b2385c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee912d789a5b6c5d2c9c7d8574b1975096969f054f46154f669ded20b6f19bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb3dd4183e3b84376101c7a0efbac3df96d9693934a5778bca7ff08e7554b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb3dd4183e3b84376101c7a0efbac3df96d9693934a5778bca7ff08e7554b42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e070bfc462a9f38c7a0ace6b75c51d491b514615d85ca57ca9a5485a653c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e070bfc462a9f38c7a0ace6b75c51d491b514615d85ca57ca9a5485a653c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c94a4d2d4128f74a0a0ecb00b4af1ed2835760620593fee78ca33f43a58d8623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c94a4d2d4128f74a0a0ecb00b4af1ed2835760620593fee78ca33f43a58d8623\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.910614 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.924413 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.941943 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ac251024105496fb2cd821720a3ad6e717ef9c6da03401d62a0d58a96dce58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:57Z\\\",\\\"message\\\":\\\"2025-12-01T09:32:12+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_792cc42c-a3cc-430c-9e1b-e07d3cee31b1\\\\n2025-12-01T09:32:12+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_792cc42c-a3cc-430c-9e1b-e07d3cee31b1 to /host/opt/cni/bin/\\\\n2025-12-01T09:32:12Z [verbose] multus-daemon started\\\\n2025-12-01T09:32:12Z [verbose] Readiness Indicator file check\\\\n2025-12-01T09:32:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.960167 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://394798e74d5e23df64b5092d4f6a60763d9c14b7348b0b0ee607066cd3db0b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:19Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.973654 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6349096c-1520-4206-a85c-e4b3d12e2a57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be0448561ffbd1804ea3b1d6aa5124a87bdc861f066ec878932aebe7ef8cec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce495e866931fe759415255c08d443d7d5a62e5a746855bffdc0ddb67d6d7cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8g5jg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:19Z is after 2025-08-24T17:21:41Z" Dec 01 
09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.987757 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.987798 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.987809 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.987826 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:19 crc kubenswrapper[4933]: I1201 09:33:19.987838 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:19Z","lastTransitionTime":"2025-12-01T09:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.090567 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.090622 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.090635 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.090652 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.090665 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:20Z","lastTransitionTime":"2025-12-01T09:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.193855 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.193942 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.193960 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.193986 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.194002 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:20Z","lastTransitionTime":"2025-12-01T09:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.297552 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.297651 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.297665 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.297688 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.297705 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:20Z","lastTransitionTime":"2025-12-01T09:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.400708 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.400774 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.400782 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.400800 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.400809 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:20Z","lastTransitionTime":"2025-12-01T09:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.503338 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.503392 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.503409 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.503428 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.503441 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:20Z","lastTransitionTime":"2025-12-01T09:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.606711 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.606783 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.606796 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.606821 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.606835 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:20Z","lastTransitionTime":"2025-12-01T09:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.710182 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.710236 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.710246 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.710269 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.710283 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:20Z","lastTransitionTime":"2025-12-01T09:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.814002 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.814059 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.814073 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.814096 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.814112 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:20Z","lastTransitionTime":"2025-12-01T09:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.917384 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.917475 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.917490 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.917508 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:20 crc kubenswrapper[4933]: I1201 09:33:20.917568 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:20Z","lastTransitionTime":"2025-12-01T09:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.020754 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.020815 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.020826 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.020846 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.020857 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:21Z","lastTransitionTime":"2025-12-01T09:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.124586 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.124663 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.124675 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.124701 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.124716 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:21Z","lastTransitionTime":"2025-12-01T09:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.227910 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.227969 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.227981 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.228009 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.228026 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:21Z","lastTransitionTime":"2025-12-01T09:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.331667 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.332526 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.332583 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.332613 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.332638 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:21Z","lastTransitionTime":"2025-12-01T09:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.436249 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.436295 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.436330 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.436345 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.436355 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:21Z","lastTransitionTime":"2025-12-01T09:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.539937 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.539992 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.540003 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.540024 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.540039 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:21Z","lastTransitionTime":"2025-12-01T09:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.643279 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.643374 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.643389 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.643413 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.643430 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:21Z","lastTransitionTime":"2025-12-01T09:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.666972 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.667061 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.667010 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.667003 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:33:21 crc kubenswrapper[4933]: E1201 09:33:21.667153 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:33:21 crc kubenswrapper[4933]: E1201 09:33:21.667262 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:33:21 crc kubenswrapper[4933]: E1201 09:33:21.667392 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:33:21 crc kubenswrapper[4933]: E1201 09:33:21.667508 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.746737 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.746867 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.746885 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.746909 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.746925 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:21Z","lastTransitionTime":"2025-12-01T09:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.849875 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.849926 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.849935 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.849952 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.849966 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:21Z","lastTransitionTime":"2025-12-01T09:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.952941 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.952998 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.953008 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.953025 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:21 crc kubenswrapper[4933]: I1201 09:33:21.953036 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:21Z","lastTransitionTime":"2025-12-01T09:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.055128 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.055186 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.055196 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.055210 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.055222 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:22Z","lastTransitionTime":"2025-12-01T09:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.158732 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.158785 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.158801 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.158836 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.158854 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:22Z","lastTransitionTime":"2025-12-01T09:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.262613 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.262684 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.262698 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.262746 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.262814 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:22Z","lastTransitionTime":"2025-12-01T09:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.366085 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.366147 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.366160 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.366183 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.366196 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:22Z","lastTransitionTime":"2025-12-01T09:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.472602 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.472668 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.472686 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.472736 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.472754 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:22Z","lastTransitionTime":"2025-12-01T09:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.576427 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.576466 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.576475 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.576489 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.576499 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:22Z","lastTransitionTime":"2025-12-01T09:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.667650 4933 scope.go:117] "RemoveContainer" containerID="74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d" Dec 01 09:33:22 crc kubenswrapper[4933]: E1201 09:33:22.667863 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zccpd_openshift-ovn-kubernetes(d49bee31-b7e9-4daa-986f-b6f58c663813)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.679277 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.679372 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.679388 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.679415 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.679432 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:22Z","lastTransitionTime":"2025-12-01T09:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.682413 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.697163 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ac251024105496fb2cd821720a3ad6e717ef9c6da03401d62a0d58a96dce58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:57Z\\\",\\\"message\\\":\\\"2025-12-01T09:32:12+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_792cc42c-a3cc-430c-9e1b-e07d3cee31b1\\\\n2025-12-01T09:32:12+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_792cc42c-a3cc-430c-9e1b-e07d3cee31b1 to /host/opt/cni/bin/\\\\n2025-12-01T09:32:12Z [verbose] multus-daemon started\\\\n2025-12-01T09:32:12Z [verbose] Readiness Indicator file check\\\\n2025-12-01T09:32:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.714855 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://394798e74d5e23df64b5092d4f6a60763d9c14b7348b0b0ee607066cd3db0b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.726336 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6349096c-1520-4206-a85c-e4b3d12e2a57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be0448561ffbd1804ea3b1d6aa5124a87bdc861f066ec878932aebe7ef8cec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce495e866931fe759415255c08d443d7d5a62e5a746855bffdc0ddb67d6d7cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8g5jg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:22Z is after 2025-08-24T17:21:41Z" Dec 01 
09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.744195 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccf3e1bb-4324-427c-a121-8d03fbbbbf2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2735f4a06b7a5b90a9b73750be04fb2598144d207bc7fcff5487142b5ce7845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f985c5d3848b8e8d2b0ad0995a2e5e65ebff87952226a2c74e07f62dd62f41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9539eb1cbec2f844ae9cccd4ac924105f6a11db5e1e03436eb369f3683e3f5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3815386b2976c2ce2dcae87a7aae2ddcfa0a53205ef1d81168c015a58b2385c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee912d789a5b6c5d2c9c7d8574b1975096969f054f46154f669ded20b6f19bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb3dd4183e3b84376101c7a0efbac3df96d9693934a5778bca7ff08e7554b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb3dd4183e3b84376101c7a0efbac3df96d9693934a5778bca7ff08e7554b42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e070bfc462a9f38c7a0ace6b75c51d491b514615d85ca57ca9a5485a653c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e070bfc462a9f38c7a0ace6b75c51d491b514615d85ca57ca9a5485a653c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c94a4d2d4128f74a0a0ecb00b4af1ed2835760620593fee78ca33f43a58d8623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c94a4d2d4128f74a0a0ecb00b4af1ed2835760620593fee78ca33f43a58d8623\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.756952 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.773868 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.781922 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.781965 4933 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.781974 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.781991 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.782006 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:22Z","lastTransitionTime":"2025-12-01T09:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.788797 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.803360 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.819094 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.834696 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.848988 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e
337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.862495 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bcqz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e67470a-b3fe-4176-b546-fdf28012fce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bcqz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.877246 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.884604 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.884949 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.885094 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.885201 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.885296 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:22Z","lastTransitionTime":"2025-12-01T09:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.888994 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.889199 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.889326 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.889418 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.889510 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:22Z","lastTransitionTime":"2025-12-01T09:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.900486 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f07a55448967411696b183c294e6f59af59d73
c5b214b06830dfc34658fc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:33:10Z\\\",\\\"message\\\":\\\"1201 09:33:10.580925 6959 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1201 09:33:10.580934 6959 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1201 09:33:10.580953 6959 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1201 09:33:10.580963 6959 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 09:33:10.580958 6959 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 09:33:10.580981 6959 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 09:33:10.580992 6959 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 09:33:10.581004 6959 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 09:33:10.581026 6959 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 09:33:10.581034 6959 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 09:33:10.581064 6959 factory.go:656] Stopping watch factory\\\\nI1201 09:33:10.581080 6959 ovnkube.go:599] Stopped ovnkube\\\\nI1201 09:33:10.581080 6959 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 09:33:10.581078 6959 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 09:33:10.581094 6959 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 09:33:10.581098 6959 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:33:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zccpd_openshift-ovn-kubernetes(d49bee31-b7e9-4daa-986f-b6f58c663813)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:22 crc kubenswrapper[4933]: E1201 09:33:22.903981 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.907950 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.907995 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.908009 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.908033 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.908046 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:22Z","lastTransitionTime":"2025-12-01T09:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.916297 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:22 crc kubenswrapper[4933]: E1201 09:33:22.921894 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.925825 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.925870 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.925883 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.925903 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.925918 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:22Z","lastTransitionTime":"2025-12-01T09:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.929285 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T09:33:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:22 crc kubenswrapper[4933]: E1201 09:33:22.939870 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.942211 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7afe09c3-fd9a-47e5-aaf2-6d017a2f0f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4034d688e26719a662808aa5c0756a8cff2b474424f6aff2987cbbf181f9e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b86f15566c5afde426670165750e324859e27846f38fa96071c4e81c1851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b86f15566c5afde426670165750e324859e27846f38fa96071c4e81c1851af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.945287 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.945409 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.945423 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.945447 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.945460 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:22Z","lastTransitionTime":"2025-12-01T09:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.957960 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25d453a5-e8e5-4563-a6af-2a0190fbe7eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cb9e78d01fb4f20fa14d20f2dd4b044fcedbebda97e0437e562e4c8b5e9072a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9beebf42bdb1ff95c1b5a3faeb820455a7c0fcb764f0b1f3fd892575a95334b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001dd3d1bb28861105ed423a5460657b031a040e934d0c789a766ca3f9499ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:22 crc kubenswrapper[4933]: E1201 09:33:22.959114 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8
391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.966469 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.966527 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.966543 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.966565 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.966588 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:22Z","lastTransitionTime":"2025-12-01T09:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:22 crc kubenswrapper[4933]: E1201 09:33:22.981735 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:22Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:22 crc kubenswrapper[4933]: E1201 09:33:22.981894 4933 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.989341 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.989448 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.989465 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.989488 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:22 crc kubenswrapper[4933]: I1201 09:33:22.989505 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:22Z","lastTransitionTime":"2025-12-01T09:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.093460 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.093510 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.093521 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.093539 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.093552 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:23Z","lastTransitionTime":"2025-12-01T09:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.196040 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.196104 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.196118 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.196142 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.196157 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:23Z","lastTransitionTime":"2025-12-01T09:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.298678 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.298733 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.298746 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.298769 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.298782 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:23Z","lastTransitionTime":"2025-12-01T09:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.402003 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.402071 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.402085 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.402109 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.402123 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:23Z","lastTransitionTime":"2025-12-01T09:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.505793 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.505850 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.505866 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.505887 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.505902 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:23Z","lastTransitionTime":"2025-12-01T09:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.608404 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.608945 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.609046 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.609134 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.609200 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:23Z","lastTransitionTime":"2025-12-01T09:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.667040 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.667128 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.667187 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:33:23 crc kubenswrapper[4933]: E1201 09:33:23.667188 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:33:23 crc kubenswrapper[4933]: E1201 09:33:23.667279 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:33:23 crc kubenswrapper[4933]: E1201 09:33:23.667460 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.667067 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 01 09:33:23 crc kubenswrapper[4933]: E1201 09:33:23.667932 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.712605 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.712679 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.712690 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.712711 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.712724 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:23Z","lastTransitionTime":"2025-12-01T09:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.816368 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.816425 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.816437 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.816457 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.816471 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:23Z","lastTransitionTime":"2025-12-01T09:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.925584 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.925637 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.925648 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.925663 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:23 crc kubenswrapper[4933]: I1201 09:33:23.925676 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:23Z","lastTransitionTime":"2025-12-01T09:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.028599 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.028655 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.028668 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.028687 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.028700 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:24Z","lastTransitionTime":"2025-12-01T09:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.132434 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.132499 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.132512 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.132539 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.132554 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:24Z","lastTransitionTime":"2025-12-01T09:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.235558 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.235589 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.235598 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.235612 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.235620 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:24Z","lastTransitionTime":"2025-12-01T09:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.338379 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.338420 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.338429 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.338443 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.338452 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:24Z","lastTransitionTime":"2025-12-01T09:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.441546 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.441601 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.441611 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.441631 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.441646 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:24Z","lastTransitionTime":"2025-12-01T09:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.544270 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.544381 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.544402 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.544430 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.544448 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:24Z","lastTransitionTime":"2025-12-01T09:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.646864 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.646919 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.646931 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.646951 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.646963 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:24Z","lastTransitionTime":"2025-12-01T09:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.750379 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.750460 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.750478 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.750507 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.750528 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:24Z","lastTransitionTime":"2025-12-01T09:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.854520 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.854580 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.854590 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.854610 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.854623 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:24Z","lastTransitionTime":"2025-12-01T09:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.957960 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.958032 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.958052 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.958077 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:24 crc kubenswrapper[4933]: I1201 09:33:24.958096 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:24Z","lastTransitionTime":"2025-12-01T09:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.060881 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.060925 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.060934 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.060951 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.060962 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:25Z","lastTransitionTime":"2025-12-01T09:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.163661 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.163715 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.163726 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.163743 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.163753 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:25Z","lastTransitionTime":"2025-12-01T09:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.267351 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.267403 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.267416 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.267437 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.267486 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:25Z","lastTransitionTime":"2025-12-01T09:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.370924 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.370993 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.371014 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.371036 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.371051 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:25Z","lastTransitionTime":"2025-12-01T09:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
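The five-record bursts above repeat roughly every 100ms: the kubelet's node-status loop keeps finding memory, disk, and PID pressure healthy, yet records NodeNotReady and re-sets the Ready condition to False, always with reason KubeletNotReady and the same missing-CNI-config message. A quick way to confirm that Ready never flips for any other reason is to parse the inline JSON out of the "Node became not ready" records. A minimal sketch in Python, assuming the dump has been saved to a local file (the name kubelet.log is a placeholder, not something the log itself names):

#!/usr/bin/env python3
"""Summarize the kubelet's Ready-condition updates from a journal dump."""
import json
import re

# "Node became not ready" records carry the condition as inline JSON.
COND_RE = re.compile(r'"Node became not ready" node="(?P<node>[^"]+)" condition=(?P<json>\{.*\})')

with open("kubelet.log", encoding="utf-8") as fh:  # hypothetical file name
    for line in fh:
        m = COND_RE.search(line)
        if not m:
            continue
        cond = json.loads(m.group("json"))
        # reason/message explain why Ready stays False; in this dump it is
        # always KubeletNotReady due to the missing CNI configuration.
        print(f'{cond["lastHeartbeatTime"]} node={m.group("node")} '
              f'Ready={cond["status"]} reason={cond["reason"]}')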
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.474099 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.474169 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.474192 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.474214 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.474228 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:25Z","lastTransitionTime":"2025-12-01T09:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.578125 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.578169 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.578179 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.578197 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.578209 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:25Z","lastTransitionTime":"2025-12-01T09:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.667464 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.667539 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.667603 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 01 09:33:25 crc kubenswrapper[4933]: E1201 09:33:25.667705 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 01 09:33:25 crc kubenswrapper[4933]: E1201 09:33:25.667791 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.667781 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 01 09:33:25 crc kubenswrapper[4933]: E1201 09:33:25.667940 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 01 09:33:25 crc kubenswrapper[4933]: E1201 09:33:25.668062 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.681178 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.681223 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.681235 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.681253 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.681265 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:25Z","lastTransitionTime":"2025-12-01T09:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.784163 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.784224 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.784241 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.784263 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.784274 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:25Z","lastTransitionTime":"2025-12-01T09:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.886764 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.886818 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.886838 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.886856 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.886866 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:25Z","lastTransitionTime":"2025-12-01T09:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.990216 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.990290 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.990350 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.990377 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:25 crc kubenswrapper[4933]: I1201 09:33:25.990396 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:25Z","lastTransitionTime":"2025-12-01T09:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.093889 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.093950 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.093966 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.093986 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.094001 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:26Z","lastTransitionTime":"2025-12-01T09:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.196357 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.196409 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.196422 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.196438 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.196447 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:26Z","lastTransitionTime":"2025-12-01T09:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.300363 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.300457 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.300479 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.300511 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.300530 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:26Z","lastTransitionTime":"2025-12-01T09:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.403426 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.403478 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.403487 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.403504 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.403516 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:26Z","lastTransitionTime":"2025-12-01T09:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.435834 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs\") pod \"network-metrics-daemon-bcqz5\" (UID: \"9e67470a-b3fe-4176-b546-fdf28012fce5\") " pod="openshift-multus/network-metrics-daemon-bcqz5"
Dec 01 09:33:26 crc kubenswrapper[4933]: E1201 09:33:26.436133 4933 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 01 09:33:26 crc kubenswrapper[4933]: E1201 09:33:26.436259 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs podName:9e67470a-b3fe-4176-b546-fdf28012fce5 nodeName:}" failed. No retries permitted until 2025-12-01 09:34:30.436230804 +0000 UTC m=+161.077954419 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs") pod "network-metrics-daemon-bcqz5" (UID: "9e67470a-b3fe-4176-b546-fdf28012fce5") : object "openshift-multus"/"metrics-daemon-secret" not registered
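The metrics-certs failure at 09:33:26.436 is a different blocker than the CNI errors: the volume manager cannot materialize the secret because the openshift-multus/metrics-daemon-secret object is not yet registered with the kubelet, and nestedpendingoperations pushes the next attempt out by 1m4s. That 64s delay is consistent with an exponential backoff that has already doubled several times, which suggests this mount has been failing since startup. The retry schedule can be read straight off these records; a sketch, same placeholder-file assumption:

#!/usr/bin/env python3
"""Extract volume-mount retry backoffs from nestedpendingoperations errors."""
import re

RETRY_RE = re.compile(
    r"No retries permitted until (?P<until>[0-9-]+ [0-9:.]+ \+0000 UTC) "
    r"m=\+(?P<mono>[0-9.]+) \(durationBeforeRetry (?P<backoff>[0-9hms]+)\)"
)

with open("kubelet.log", encoding="utf-8") as fh:  # hypothetical file name
    for line in fh:
        m = RETRY_RE.search(line)
        if m:
            # On the record above: backoff 1m4s, next retry at 09:34:30.
            print(f'next retry at {m.group("until")} '
                  f'(backoff {m.group("backoff")}, monotonic +{m.group("mono")}s)')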
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.505959 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.506014 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.506025 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.506044 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.506059 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:26Z","lastTransitionTime":"2025-12-01T09:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.609213 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.609290 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.609336 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.609358 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.609372 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:26Z","lastTransitionTime":"2025-12-01T09:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.712377 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.712434 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.712442 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.712458 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.712468 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:26Z","lastTransitionTime":"2025-12-01T09:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.814894 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.814960 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.814969 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.814985 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.814999 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:26Z","lastTransitionTime":"2025-12-01T09:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.918361 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.918430 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.918442 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.918463 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:26 crc kubenswrapper[4933]: I1201 09:33:26.918476 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:26Z","lastTransitionTime":"2025-12-01T09:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.021717 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.021772 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.021785 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.021805 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.021816 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:27Z","lastTransitionTime":"2025-12-01T09:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.124755 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.124806 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.124822 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.124844 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.124859 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:27Z","lastTransitionTime":"2025-12-01T09:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.227245 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.227322 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.227334 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.227357 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.227372 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:27Z","lastTransitionTime":"2025-12-01T09:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.330360 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.330426 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.330437 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.330465 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.330478 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:27Z","lastTransitionTime":"2025-12-01T09:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.433240 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.433341 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.433356 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.433380 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.433393 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:27Z","lastTransitionTime":"2025-12-01T09:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.536360 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.536445 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.536469 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.536501 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.536523 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:27Z","lastTransitionTime":"2025-12-01T09:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.639117 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.639150 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.639176 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.639191 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.639200 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:27Z","lastTransitionTime":"2025-12-01T09:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.666722 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 01 09:33:27 crc kubenswrapper[4933]: E1201 09:33:27.666870 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.666752 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.666910 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.666722 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 01 09:33:27 crc kubenswrapper[4933]: E1201 09:33:27.666976 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 01 09:33:27 crc kubenswrapper[4933]: E1201 09:33:27.667032 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5"
Dec 01 09:33:27 crc kubenswrapper[4933]: E1201 09:33:27.667070 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.741474 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.741529 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.741542 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.741563 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.741576 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:27Z","lastTransitionTime":"2025-12-01T09:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.843974 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.844009 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.844022 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.844046 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.844061 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:27Z","lastTransitionTime":"2025-12-01T09:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.946525 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.946573 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.946585 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.946606 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:27 crc kubenswrapper[4933]: I1201 09:33:27.946620 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:27Z","lastTransitionTime":"2025-12-01T09:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.047505 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.048677 4933 scope.go:117] "RemoveContainer" containerID="74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d"
Dec 01 09:33:28 crc kubenswrapper[4933]: E1201 09:33:28.048893 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zccpd_openshift-ovn-kubernetes(d49bee31-b7e9-4daa-986f-b6f58c663813)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.049352 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.049414 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.049432 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.049452 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.049463 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:28Z","lastTransitionTime":"2025-12-01T09:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
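The 09:33:28.048 records point at the likely root cause of everything above: ovnkube-controller, part of the OVN-Kubernetes network provider that would write the CNI configuration into /etc/kubernetes/cni/net.d/, is itself in CrashLoopBackOff, currently held for 40s. That delay fits the kubelet's usual crash-loop schedule (a 10s base that doubles toward a 5m cap), so the container has already failed several times. A sketch for pulling these events out of the dump, same placeholder-file assumption:

#!/usr/bin/env python3
"""Pull CrashLoopBackOff events out of a kubelet journal dump."""
import re

CRASH_RE = re.compile(
    r'back-off (?P<delay>\S+) restarting failed container=(?P<ctr>\S+) '
    r'pod=(?P<pod>[^_]+)_(?P<ns>[^(]+)\((?P<uid>[^)]+)\)'
)

with open("kubelet.log", encoding="utf-8") as fh:  # hypothetical file name
    for line in fh:
        m = CRASH_RE.search(line)
        if m:
            # On the record above: openshift-ovn-kubernetes/ovnkube-node-zccpd,
            # container ovnkube-controller, backoff 40s.
            print(f'{m.group("ns")}/{m.group("pod")} container={m.group("ctr")} '
                  f'backoff={m.group("delay")} uid={m.group("uid")}')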
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.152708 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.152765 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.152777 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.152793 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.152876 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:28Z","lastTransitionTime":"2025-12-01T09:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.255770 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.255832 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.255847 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.255872 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.255889 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:28Z","lastTransitionTime":"2025-12-01T09:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.358366 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.358440 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.358450 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.358471 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.358485 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:28Z","lastTransitionTime":"2025-12-01T09:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.461915 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.461976 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.461987 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.462009 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.462030 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:28Z","lastTransitionTime":"2025-12-01T09:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.564934 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.564990 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.565001 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.565019 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.565031 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:28Z","lastTransitionTime":"2025-12-01T09:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.668585 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.668641 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.668652 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.668672 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.668683 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:28Z","lastTransitionTime":"2025-12-01T09:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.771398 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.771473 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.771484 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.771501 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.771512 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:28Z","lastTransitionTime":"2025-12-01T09:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.874771 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.874858 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.874872 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.874897 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.874914 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:28Z","lastTransitionTime":"2025-12-01T09:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.977994 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.978047 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.978057 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.978076 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:28 crc kubenswrapper[4933]: I1201 09:33:28.978088 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:28Z","lastTransitionTime":"2025-12-01T09:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.081084 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.081131 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.081142 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.081160 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.081171 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:29Z","lastTransitionTime":"2025-12-01T09:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.184902 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.184972 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.184984 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.185069 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.185093 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:29Z","lastTransitionTime":"2025-12-01T09:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.287881 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.287949 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.287961 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.287983 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.287996 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:29Z","lastTransitionTime":"2025-12-01T09:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.392406 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.392471 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.392488 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.392510 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.392524 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:29Z","lastTransitionTime":"2025-12-01T09:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.495895 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.495967 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.495977 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.495993 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.496005 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:29Z","lastTransitionTime":"2025-12-01T09:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.597979 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.598014 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.598048 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.598068 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.598078 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:29Z","lastTransitionTime":"2025-12-01T09:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
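[Editor's note: the block above repeats at roughly 100 ms intervals because the kubelet re-polls the container runtime's status while no CNI configuration is present. The actual NetworkReady check is performed by the runtime (CRI-O) and only surfaced by the kubelet; the sketch below is not kubelet code, just a minimal stand-alone reproduction of the directory test implied by the message, assuming only the path named in the log.]

```go
// cnicheck.go - illustrative sketch: report NetworkReady the way the log's
// "no CNI configuration file in /etc/kubernetes/cni/net.d/" message implies.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory reported in the log
	var confFiles []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		// Glob only fails on a malformed pattern, so the error is ignored here.
		matches, _ := filepath.Glob(filepath.Join(confDir, pattern))
		confFiles = append(confFiles, matches...)
	}
	if len(confFiles) == 0 {
		fmt.Println("NetworkReady=false: no CNI configuration file found in", confDir)
		os.Exit(1)
	}
	fmt.Println("NetworkReady=true:", confFiles)
}
```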
Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.666922 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.667015 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.667146 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5"
Dec 01 09:33:29 crc kubenswrapper[4933]: E1201 09:33:29.667144 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.667208 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 09:33:29 crc kubenswrapper[4933]: E1201 09:33:29.667389 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5"
Dec 01 09:33:29 crc kubenswrapper[4933]: E1201 09:33:29.667725 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 01 09:33:29 crc kubenswrapper[4933]: E1201 09:33:29.668039 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.683627 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:29Z is after 2025-08-24T17:21:41Z"
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a680ea2b-148f-406d-9d17-4a5a953cbe5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 09:32:02.234168 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 09:32:02.235692 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1047304057/tls.crt::/tmp/serving-cert-1047304057/tls.key\\\\\\\"\\\\nI1201 09:32:07.965646 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 09:32:07.969421 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 09:32:07.970151 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 09:32:07.970185 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 09:32:07.970191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 09:32:07.982738 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 09:32:07.982894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 09:32:07.982953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 09:32:07.982979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 09:32:07.983003 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 09:32:07.983027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 09:32:07.982856 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 09:32:07.985539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.702384 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.702425 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.702436 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.702476 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.702489 4933 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:29Z","lastTransitionTime":"2025-12-01T09:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.713060 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab365ae-f377-4f29-8765-1c380536edc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4543a9ce265fa5f4ccd2ba3539eba10057ee086a57ba82164a53cc80aa6f936e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f3b9985e4cdd2bf253d1381600b089eea3470f93bfe97fbf94e32e455c9223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a494af8a203bfa25fd59ca3b717a87d521b75f77c75c012e35e4e1cded2f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.726885 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://013005d729ea158619d51454fa69e770222a197a79358e08e0217d878147671a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.738878 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.750926 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6093b0f3328a8e1cc2405e7cccd4ac939af60e6c53d0583c197a76202c5e500b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.772102 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" err="failed to patch status 
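[Editor's note: the same webhook failure now repeats for every pod on the node, which makes the excerpt tedious to triage by eye. A small sketch for pulling the affected pods out of a saved journal excerpt like this one; the input filename is illustrative, e.g. from `journalctl -u kubelet > kubelet.log`.]

```go
// scanstatus.go - illustrative sketch: list the pods whose status patches
// are being rejected in a saved kubelet journal excerpt.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	f, err := os.Open("kubelet.log") // illustrative path
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Matches the pod="..." field of the "Failed to update status" records.
	podRe := regexp.MustCompile(`Failed to update status for pod" pod="([^"]+)"`)

	sc := bufio.NewScanner(f)
	// Records in this log exceed bufio.Scanner's default 64 KiB token limit,
	// so grow the buffer before scanning.
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		if m := podRe.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Println("status patch failed for:", m[1])
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
}
```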
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d49bee31-b7e9-4daa-986f-b6f58c663813\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:33:10Z\\\",\\\"message\\\":\\\"1201 09:33:10.580925 6959 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1201 09:33:10.580934 6959 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1201 09:33:10.580953 6959 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1201 09:33:10.580963 6959 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 09:33:10.580958 6959 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 09:33:10.580981 6959 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 09:33:10.580992 6959 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 09:33:10.581004 6959 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 09:33:10.581026 6959 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 09:33:10.581034 6959 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 09:33:10.581064 6959 factory.go:656] Stopping watch factory\\\\nI1201 09:33:10.581080 6959 ovnkube.go:599] Stopped ovnkube\\\\nI1201 09:33:10.581080 6959 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 09:33:10.581078 6959 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 09:33:10.581094 6959 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 09:33:10.581098 6959 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:33:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zccpd_openshift-ovn-kubernetes(d49bee31-b7e9-4daa-986f-b6f58c663813)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9968\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zccpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.787327 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31deca5a-8ffe-4967-b02f-98a2043ddb23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d9f01f123bfdd48b1d7caf10eff303f04475e3644849eeb1a445f1bf595efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh2lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4lcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.799647 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bcqz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e67470a-b3fe-4176-b546-fdf28012fce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bcqz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.804416 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.804452 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.804460 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.804474 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.804485 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:29Z","lastTransitionTime":"2025-12-01T09:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.812149 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7afe09c3-fd9a-47e5-aaf2-6d017a2f0f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4034d688e26719a662808aa5c0756a8cff2b474424f6aff2987cbbf181f9e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b86f15566c5afde426670165750e324859e27846f38fa96071c4e81c1851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b86f15566c5afde426670165750e324859e27846f38fa96071c4e81c1851af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.825168 4933 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25d453a5-e8e5-4563-a6af-2a0190fbe7eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cb9e78d01fb4f20fa14d20f2dd4b044fcedbebda97e0437e562e4c8b5e9072a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9beebf42bdb1ff95c1b5a3faeb820455a7c0fcb764f0b1f3fd892575a95334b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001dd3d1bb28861105ed423a5460657b031a040e934d0c789a766ca3f9499ba1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d979ab5d82331967f08060ef73b88d2862a1f269f9aaeb7bc7b17904c0c01dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.839032 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9ccb9f1c2142d5f5408356cc43e3480ae5297baac68cf4683407137cc266330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c95158a4796992faf3ffcf5c50b33583a7d8df1decdae1a70c54af5ef767d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.853388 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qvh8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2befd5-f33d-48b0-9873-bf540dc9895c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2262db69605cbfefb4eb5232722a0df72a0a7c1910a0d2b6c9e1989f36e3b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wthcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qvh8t\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.871089 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae5a541-953b-49b6-8dfa-d19cdd133d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://394798e74d5e23df64b5092d4f6a60763d9c14b7348b0b0ee607066cd3db0b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e535e9c3445b4e008f9af02e7fb93a415a68adf4736c649ae6b38097dc65682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e10817249fd55da6c77fa3c32bab06647f9fa879a957eb405035161332cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccce92f07c3a768f1ebfd7ca4ae4535328f7525f5b93175d682c1fb47db67519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473efd995332f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b572855682ba6e8e4f69014d2e6747268b4fcf38aafadd083473
efd995332f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://521b28cf2600f5dbd5e03c46d1860b4c336e1767a14fafe95a29ac0750fd9dee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc996c02c8da45c7fe3ceef5d80586a210fd5da56fdb44dc5db880cff3f32a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8zrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ftnw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.886080 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6349096c-1520-4206-a85c-e4b3d12e2a57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be0448561ffbd1804ea3b1d6aa5124a87bdc861f066ec878932aebe7ef8cec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce495e866931fe759415255c08d443d7d5a62e5a746855bffdc0ddb67d6d7cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:20Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8g5jg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.906629 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccf3e1bb-4324-427c-a121-8d03fbbbbf2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:31:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2735f4a06b7a5b90a9b73750be04fb2598144d207bc7fcff5487142b5ce7845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f985c5d3848b8e8d2b0ad0995a2e5e65ebff87952226a2c74e07f62dd62f41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9539eb1cbec2f844ae9cccd4ac924105f6a11db5e1e03436eb369f3683e3f5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3815386b2976c2ce2dcae87a7aae2ddcfa0a53205ef1d81168c015a58b2385c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee912d789a5b6c5d2c9c7d8574b1975096969f054f46154f669ded20b6f19bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb3dd4183e3b84376101c7a0efbac3df96d9693934a5778bca7ff08e7554b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb3dd4183e3b84376101c7a0efbac3df96d9693934a5778bca7ff08e7554b42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e070bfc462a9f38c7a0ace6b75c51d491b514615d85ca57ca9a5485a653c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c687744
1ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e070bfc462a9f38c7a0ace6b75c51d491b514615d85ca57ca9a5485a653c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c94a4d2d4128f74a0a0ecb00b4af1ed2835760620593fee78ca33f43a58d8623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c94a4d2d4128f74a0a0ecb00b4af1ed2835760620593fee78ca33f43a58d8623\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T09:31:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T09:31:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:31:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.907391 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.907427 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.907439 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.907457 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.907470 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:29Z","lastTransitionTime":"2025-12-01T09:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.923750 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.934492 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nzz88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c24a92ea-5279-4bf2-847f-04981f1c330a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eff74b3b1ec243d392cdca7ac929679469df1a648f309542d8bbfe06e79952bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk8cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nzz88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:29 crc kubenswrapper[4933]: I1201 09:33:29.954690 4933 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fncv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T09:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ac251024105496fb2cd821720a3ad6e717ef9c6da03401d62a0d58a96dce58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T09:32:57Z\\\",\\\"message\\\":\\\"2025-12-01T09:32:12+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_792cc42c-a3cc-430c-9e1b-e07d3cee31b1\\\\n2025-12-01T09:32:12+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_792cc42c-a3cc-430c-9e1b-e07d3cee31b1 to /host/opt/cni/bin/\\\\n2025-12-01T09:32:12Z [verbose] multus-daemon started\\\\n2025-12-01T09:32:12Z [verbose] Readiness Indicator file check\\\\n2025-12-01T09:32:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T09:32:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T09:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8p8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T09:32:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fncv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:29Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:30 crc kubenswrapper[4933]: I1201 09:33:30.010019 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:30 crc kubenswrapper[4933]: I1201 09:33:30.010077 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:30 crc kubenswrapper[4933]: I1201 09:33:30.010087 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:30 crc kubenswrapper[4933]: I1201 09:33:30.010102 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:30 crc kubenswrapper[4933]: I1201 09:33:30.010111 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:30Z","lastTransitionTime":"2025-12-01T09:33:30Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:30 crc kubenswrapper[4933]: I1201 09:33:30.113778 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:30 crc kubenswrapper[4933]: I1201 09:33:30.113830 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:30 crc kubenswrapper[4933]: I1201 09:33:30.113839 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:30 crc kubenswrapper[4933]: I1201 09:33:30.113860 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:30 crc kubenswrapper[4933]: I1201 09:33:30.113873 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:30Z","lastTransitionTime":"2025-12-01T09:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five-entry cycle (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeats at roughly 100 ms intervals from 09:33:30.216 through 09:33:31.659 ...]
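The setters.go:603 entry above embeds the Ready condition as a JSON object. For offline analysis of entries like this, a minimal Go sketch (hypothetical; the struct mirrors only the fields visible in the logged payload, not the full Kubernetes NodeCondition type) that pulls out the reason and message:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    // nodeCondition mirrors only the fields present in the logged
    // condition={...} payload; it is not the full Kubernetes type.
    type nodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        // Condition JSON copied from the setters.go:603 entry above.
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:30Z","lastTransitionTime":"2025-12-01T09:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
        var c nodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s=%s reason=%s\n%s\n", c.Type, c.Status, c.Reason, c.Message)
    }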
Dec 01 09:33:31 crc kubenswrapper[4933]: I1201 09:33:31.666652 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:33:31 crc kubenswrapper[4933]: I1201 09:33:31.666652 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:33:31 crc kubenswrapper[4933]: I1201 09:33:31.666673 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:33:31 crc kubenswrapper[4933]: I1201 09:33:31.666694 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:33:31 crc kubenswrapper[4933]: E1201 09:33:31.666908 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:33:31 crc kubenswrapper[4933]: E1201 09:33:31.667204 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:33:31 crc kubenswrapper[4933]: E1201 09:33:31.667554 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:33:31 crc kubenswrapper[4933]: E1201 09:33:31.667611 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
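Every failure above points at the same root state: /etc/kubernetes/cni/net.d/ contains no CNI configuration, so the kubelet keeps the node NotReady and skips pod sandbox creation until the network provider writes its config there. A minimal diagnostic sketch (assuming read access to that directory on the node):

    package main

    import (
        "fmt"
        "log"
        "os"
    )

    func main() {
        // Directory named in the NetworkPluginNotReady message above.
        const cniDir = "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(cniDir)
        if err != nil {
            log.Fatalf("cannot read %s: %v", cniDir, err)
        }
        if len(entries) == 0 {
            fmt.Printf("%s is empty: the network plugin has not written its config yet\n", cniDir)
            return
        }
        for _, e := range entries {
            fmt.Println(e.Name()) // typically a *.conf or *.conflist written by the network provider
        }
    }

While the directory stays empty, the "no CNI configuration file" message above keeps repeating on every sync attempt.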
[... the cycle resumes at 09:33:31.771 and repeats at ~100 ms intervals through 09:33:33.114 ...]
Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.197054 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.197100 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.197112 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.197133 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.197144 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:33Z","lastTransitionTime":"2025-12-01T09:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:33 crc kubenswrapper[4933]: E1201 09:33:33.212991 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:33Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.218516 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.218575 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.218594 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.218619 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.218637 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:33Z","lastTransitionTime":"2025-12-01T09:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:33 crc kubenswrapper[4933]: E1201 09:33:33.230941 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.234752 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.234806 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.234818 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.234834 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.234846 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:33Z","lastTransitionTime":"2025-12-01T09:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:33 crc kubenswrapper[4933]: E1201 09:33:33.248258 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:33Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.253009 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.253049 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.253063 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.253081 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.253094 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:33Z","lastTransitionTime":"2025-12-01T09:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:33 crc kubenswrapper[4933]: E1201 09:33:33.265323 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:33Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.269293 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.269381 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.269398 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.269423 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.269438 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:33Z","lastTransitionTime":"2025-12-01T09:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:33 crc kubenswrapper[4933]: E1201 09:33:33.283177 4933 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T09:33:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b561dab6-afeb-4be9-867b-b25a2a946b2a\\\",\\\"systemUUID\\\":\\\"8391db47-1ebd-4bbe-b230-559ad9e10347\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T09:33:33Z is after 2025-08-24T17:21:41Z" Dec 01 09:33:33 crc kubenswrapper[4933]: E1201 09:33:33.283301 4933 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.285296 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.285370 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.285383 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.285402 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.285414 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:33Z","lastTransitionTime":"2025-12-01T09:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.388738 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.388805 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.388821 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.388841 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.388853 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:33Z","lastTransitionTime":"2025-12-01T09:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.492495 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.492584 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.492610 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.492639 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.492658 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:33Z","lastTransitionTime":"2025-12-01T09:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.596719 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.596783 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.596802 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.596827 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.596846 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:33Z","lastTransitionTime":"2025-12-01T09:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.667544 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.667651 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.667696 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:33:33 crc kubenswrapper[4933]: E1201 09:33:33.667891 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:33:33 crc kubenswrapper[4933]: E1201 09:33:33.668175 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.668238 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:33:33 crc kubenswrapper[4933]: E1201 09:33:33.668438 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:33:33 crc kubenswrapper[4933]: E1201 09:33:33.668628 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.700586 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.700657 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.700677 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.700697 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.700711 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:33Z","lastTransitionTime":"2025-12-01T09:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.803784 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.803842 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.803861 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.803884 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.803902 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:33Z","lastTransitionTime":"2025-12-01T09:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.906877 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.906917 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.906925 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.906941 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:33 crc kubenswrapper[4933]: I1201 09:33:33.906953 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:33Z","lastTransitionTime":"2025-12-01T09:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.009887 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.009959 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.009969 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.009992 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.010006 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:34Z","lastTransitionTime":"2025-12-01T09:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.112919 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.112965 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.112977 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.112996 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.113012 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:34Z","lastTransitionTime":"2025-12-01T09:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.216488 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.216582 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.216606 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.216658 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.216682 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:34Z","lastTransitionTime":"2025-12-01T09:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.320479 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.320526 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.320536 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.320554 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.320567 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:34Z","lastTransitionTime":"2025-12-01T09:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.423793 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.424182 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.424447 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.424634 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.424790 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:34Z","lastTransitionTime":"2025-12-01T09:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.528553 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.528834 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.528909 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.528993 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.529072 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:34Z","lastTransitionTime":"2025-12-01T09:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.631537 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.632449 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.632493 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.632514 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.632524 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:34Z","lastTransitionTime":"2025-12-01T09:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.734933 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.734983 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.734997 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.735011 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.735020 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:34Z","lastTransitionTime":"2025-12-01T09:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.838123 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.838181 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.838195 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.838216 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.838230 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:34Z","lastTransitionTime":"2025-12-01T09:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.941177 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.941909 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.941956 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.941986 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:34 crc kubenswrapper[4933]: I1201 09:33:34.942003 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:34Z","lastTransitionTime":"2025-12-01T09:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.044152 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.044477 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.044602 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.044632 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.044653 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:35Z","lastTransitionTime":"2025-12-01T09:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.157875 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.157923 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.157935 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.157948 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.157959 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:35Z","lastTransitionTime":"2025-12-01T09:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.260731 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.260797 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.260809 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.260825 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.260836 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:35Z","lastTransitionTime":"2025-12-01T09:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.364111 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.364158 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.364171 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.364186 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.364195 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:35Z","lastTransitionTime":"2025-12-01T09:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.466358 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.466397 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.466405 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.466423 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.466439 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:35Z","lastTransitionTime":"2025-12-01T09:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.569764 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.569831 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.569845 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.569875 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.569893 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:35Z","lastTransitionTime":"2025-12-01T09:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.666655 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.666731 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.666749 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.666699 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:33:35 crc kubenswrapper[4933]: E1201 09:33:35.666858 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:33:35 crc kubenswrapper[4933]: E1201 09:33:35.667016 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:33:35 crc kubenswrapper[4933]: E1201 09:33:35.667115 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:33:35 crc kubenswrapper[4933]: E1201 09:33:35.667206 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.671945 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.671989 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.671998 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.672012 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.672022 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:35Z","lastTransitionTime":"2025-12-01T09:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.774541 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.774580 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.774589 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.774604 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.774615 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:35Z","lastTransitionTime":"2025-12-01T09:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.877621 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.877664 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.877675 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.877691 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.877702 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:35Z","lastTransitionTime":"2025-12-01T09:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.980019 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.980055 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.980065 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.980078 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:35 crc kubenswrapper[4933]: I1201 09:33:35.980088 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:35Z","lastTransitionTime":"2025-12-01T09:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.082171 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.082221 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.082232 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.082252 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.082263 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:36Z","lastTransitionTime":"2025-12-01T09:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.183782 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.183818 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.183827 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.183839 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.183848 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:36Z","lastTransitionTime":"2025-12-01T09:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.286775 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.286845 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.286855 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.286883 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.286892 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:36Z","lastTransitionTime":"2025-12-01T09:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.389196 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.389236 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.389248 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.389264 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.389274 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:36Z","lastTransitionTime":"2025-12-01T09:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.491714 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.491764 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.491777 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.491798 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.491809 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:36Z","lastTransitionTime":"2025-12-01T09:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.594633 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.594675 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.594686 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.594702 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.594715 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:36Z","lastTransitionTime":"2025-12-01T09:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.697645 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.697697 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.697715 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.697739 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.697757 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:36Z","lastTransitionTime":"2025-12-01T09:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.800981 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.801041 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.801058 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.801081 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.801097 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:36Z","lastTransitionTime":"2025-12-01T09:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.903562 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.903606 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.903621 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.903639 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:36 crc kubenswrapper[4933]: I1201 09:33:36.903653 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:36Z","lastTransitionTime":"2025-12-01T09:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.006675 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.006717 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.006727 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.006740 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.006750 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:37Z","lastTransitionTime":"2025-12-01T09:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.113975 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.114014 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.114026 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.114062 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.114074 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:37Z","lastTransitionTime":"2025-12-01T09:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.216068 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.216118 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.216130 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.216147 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.216158 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:37Z","lastTransitionTime":"2025-12-01T09:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.319106 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.319147 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.319156 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.319173 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.319185 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:37Z","lastTransitionTime":"2025-12-01T09:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.423207 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.423271 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.423280 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.423300 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.423330 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:37Z","lastTransitionTime":"2025-12-01T09:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.526852 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.526936 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.526952 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.526975 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.526990 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:37Z","lastTransitionTime":"2025-12-01T09:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.629953 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.630030 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.630043 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.630065 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.630080 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:37Z","lastTransitionTime":"2025-12-01T09:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.667265 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.667371 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.667301 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:33:37 crc kubenswrapper[4933]: E1201 09:33:37.667525 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:33:37 crc kubenswrapper[4933]: E1201 09:33:37.667654 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:33:37 crc kubenswrapper[4933]: E1201 09:33:37.667877 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.667918 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:33:37 crc kubenswrapper[4933]: E1201 09:33:37.668157 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.733384 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.733427 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.733439 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.733458 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.733468 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:37Z","lastTransitionTime":"2025-12-01T09:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.836016 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.836065 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.836074 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.836094 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.836106 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:37Z","lastTransitionTime":"2025-12-01T09:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.939284 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.939369 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.939386 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.939406 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:37 crc kubenswrapper[4933]: I1201 09:33:37.939421 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:37Z","lastTransitionTime":"2025-12-01T09:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.042651 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.042687 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.042697 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.042714 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.042724 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:38Z","lastTransitionTime":"2025-12-01T09:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.146635 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.146682 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.146691 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.146707 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.146729 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:38Z","lastTransitionTime":"2025-12-01T09:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.250077 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.250114 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.250123 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.250139 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.250149 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:38Z","lastTransitionTime":"2025-12-01T09:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.352828 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.352903 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.352917 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.352938 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.352955 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:38Z","lastTransitionTime":"2025-12-01T09:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.455886 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.455932 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.455943 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.455961 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.455973 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:38Z","lastTransitionTime":"2025-12-01T09:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.559226 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.559300 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.559348 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.559374 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.559396 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:38Z","lastTransitionTime":"2025-12-01T09:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.662453 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.662506 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.662526 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.662548 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.662559 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:38Z","lastTransitionTime":"2025-12-01T09:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.765322 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.765394 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.765409 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.765431 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.765461 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:38Z","lastTransitionTime":"2025-12-01T09:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.868188 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.868297 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.868622 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.868666 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.868677 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:38Z","lastTransitionTime":"2025-12-01T09:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.972466 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.972524 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.972538 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.972555 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:38 crc kubenswrapper[4933]: I1201 09:33:38.972566 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:38Z","lastTransitionTime":"2025-12-01T09:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.075685 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.075728 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.075739 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.075760 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.075772 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:39Z","lastTransitionTime":"2025-12-01T09:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.178165 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.178247 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.178262 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.178281 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.178296 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:39Z","lastTransitionTime":"2025-12-01T09:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.280366 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.280423 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.280432 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.280445 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.280453 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:39Z","lastTransitionTime":"2025-12-01T09:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.382459 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.382502 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.382513 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.382530 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.382542 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:39Z","lastTransitionTime":"2025-12-01T09:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.485197 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.485249 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.485267 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.485285 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.485295 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:39Z","lastTransitionTime":"2025-12-01T09:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.588130 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.588187 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.588199 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.588219 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.588232 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:39Z","lastTransitionTime":"2025-12-01T09:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.666755 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:33:39 crc kubenswrapper[4933]: E1201 09:33:39.666921 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.666997 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.667001 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.667035 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:33:39 crc kubenswrapper[4933]: E1201 09:33:39.667159 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:33:39 crc kubenswrapper[4933]: E1201 09:33:39.667301 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:33:39 crc kubenswrapper[4933]: E1201 09:33:39.667430 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.690239 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.690286 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.690304 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.690336 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.690350 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:39Z","lastTransitionTime":"2025-12-01T09:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.696590 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-ftnw9" podStartSLOduration=91.696560675 podStartE2EDuration="1m31.696560675s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:33:39.696441682 +0000 UTC m=+110.338165297" watchObservedRunningTime="2025-12-01 09:33:39.696560675 +0000 UTC m=+110.338284280" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.738105 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8g5jg" podStartSLOduration=91.73807855 podStartE2EDuration="1m31.73807855s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:33:39.711276569 +0000 UTC m=+110.353000184" watchObservedRunningTime="2025-12-01 09:33:39.73807855 +0000 UTC m=+110.379802165" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.752599 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=30.752572107 podStartE2EDuration="30.752572107s" podCreationTimestamp="2025-12-01 09:33:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:33:39.737784193 +0000 UTC m=+110.379507838" watchObservedRunningTime="2025-12-01 09:33:39.752572107 +0000 UTC m=+110.394295722" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.766764 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-nzz88" podStartSLOduration=91.766740027 podStartE2EDuration="1m31.766740027s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:33:39.766504741 +0000 UTC m=+110.408228366" watchObservedRunningTime="2025-12-01 09:33:39.766740027 +0000 UTC m=+110.408463642" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.781492 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-4fncv" podStartSLOduration=91.78145878 podStartE2EDuration="1m31.78145878s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:33:39.781183294 +0000 UTC m=+110.422906909" watchObservedRunningTime="2025-12-01 09:33:39.78145878 +0000 UTC m=+110.423182395" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.792953 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.792994 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.793006 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.793022 4933 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.793034 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:39Z","lastTransitionTime":"2025-12-01T09:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.816559 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=91.816525695 podStartE2EDuration="1m31.816525695s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:33:39.816267189 +0000 UTC m=+110.457990814" watchObservedRunningTime="2025-12-01 09:33:39.816525695 +0000 UTC m=+110.458249310" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.855219 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=91.855166869 podStartE2EDuration="1m31.855166869s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:33:39.837103163 +0000 UTC m=+110.478826788" watchObservedRunningTime="2025-12-01 09:33:39.855166869 +0000 UTC m=+110.496890484" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.896168 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.896209 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.896218 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.896232 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.896241 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:39Z","lastTransitionTime":"2025-12-01T09:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.941069 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podStartSLOduration=91.941051708 podStartE2EDuration="1m31.941051708s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:33:39.940662688 +0000 UTC m=+110.582386313" watchObservedRunningTime="2025-12-01 09:33:39.941051708 +0000 UTC m=+110.582775323" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.982661 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=21.982642804 podStartE2EDuration="21.982642804s" podCreationTimestamp="2025-12-01 09:33:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:33:39.968278229 +0000 UTC m=+110.610001844" watchObservedRunningTime="2025-12-01 09:33:39.982642804 +0000 UTC m=+110.624366419" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.998945 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.998982 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.998994 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.999011 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:39 crc kubenswrapper[4933]: I1201 09:33:39.999025 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:39Z","lastTransitionTime":"2025-12-01T09:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.000414 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=63.000402162 podStartE2EDuration="1m3.000402162s" podCreationTimestamp="2025-12-01 09:32:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:33:39.983282509 +0000 UTC m=+110.625006124" watchObservedRunningTime="2025-12-01 09:33:40.000402162 +0000 UTC m=+110.642125787" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.101572 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.101613 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.101622 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.101637 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.101646 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:40Z","lastTransitionTime":"2025-12-01T09:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.204212 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.204249 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.204257 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.204270 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.204279 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:40Z","lastTransitionTime":"2025-12-01T09:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.308520 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.308560 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.308573 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.308590 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.308602 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:40Z","lastTransitionTime":"2025-12-01T09:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.411632 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.411701 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.411741 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.411762 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.411775 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:40Z","lastTransitionTime":"2025-12-01T09:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.514732 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.514788 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.514802 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.514821 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.514837 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:40Z","lastTransitionTime":"2025-12-01T09:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.617957 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.618001 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.618015 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.618036 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.618049 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:40Z","lastTransitionTime":"2025-12-01T09:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.667772 4933 scope.go:117] "RemoveContainer" containerID="74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d" Dec 01 09:33:40 crc kubenswrapper[4933]: E1201 09:33:40.667970 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zccpd_openshift-ovn-kubernetes(d49bee31-b7e9-4daa-986f-b6f58c663813)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.721563 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.721616 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.721630 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.721647 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.721672 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:40Z","lastTransitionTime":"2025-12-01T09:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.823991 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.824058 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.824074 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.824093 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.824108 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:40Z","lastTransitionTime":"2025-12-01T09:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.927332 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.927381 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.927391 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.927411 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:40 crc kubenswrapper[4933]: I1201 09:33:40.927423 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:40Z","lastTransitionTime":"2025-12-01T09:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.030280 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.030380 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.030398 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.030425 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.030442 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:41Z","lastTransitionTime":"2025-12-01T09:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.133296 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.133369 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.133380 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.133403 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.133416 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:41Z","lastTransitionTime":"2025-12-01T09:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.236397 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.236433 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.236442 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.236479 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.236491 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:41Z","lastTransitionTime":"2025-12-01T09:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.340147 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.340200 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.340214 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.340234 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.340249 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:41Z","lastTransitionTime":"2025-12-01T09:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.443255 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.443333 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.443346 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.443363 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.443375 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:41Z","lastTransitionTime":"2025-12-01T09:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.545763 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.545817 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.545831 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.545848 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.545864 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:41Z","lastTransitionTime":"2025-12-01T09:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.648276 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.648400 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.648410 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.648427 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.648438 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:41Z","lastTransitionTime":"2025-12-01T09:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.666746 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.666876 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.666920 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:33:41 crc kubenswrapper[4933]: E1201 09:33:41.666883 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:33:41 crc kubenswrapper[4933]: E1201 09:33:41.667032 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.667043 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:33:41 crc kubenswrapper[4933]: E1201 09:33:41.667193 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:33:41 crc kubenswrapper[4933]: E1201 09:33:41.667287 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.751457 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.751491 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.751501 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.751516 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.751526 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:41Z","lastTransitionTime":"2025-12-01T09:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.853643 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.853698 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.853716 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.853739 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.853758 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:41Z","lastTransitionTime":"2025-12-01T09:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.956210 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.956291 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.956347 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.956391 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:41 crc kubenswrapper[4933]: I1201 09:33:41.956414 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:41Z","lastTransitionTime":"2025-12-01T09:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.059588 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.059628 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.059637 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.059652 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.059661 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:42Z","lastTransitionTime":"2025-12-01T09:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.162657 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.162699 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.162710 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.162728 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.162739 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:42Z","lastTransitionTime":"2025-12-01T09:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.265249 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.265284 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.265294 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.265324 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.265335 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:42Z","lastTransitionTime":"2025-12-01T09:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.368609 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.368679 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.368702 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.368730 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.368752 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:42Z","lastTransitionTime":"2025-12-01T09:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.472023 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.472123 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.472182 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.472207 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.472251 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:42Z","lastTransitionTime":"2025-12-01T09:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.574961 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.575009 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.575026 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.575043 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.575053 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:42Z","lastTransitionTime":"2025-12-01T09:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.677677 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.677777 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.677803 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.677838 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.677864 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:42Z","lastTransitionTime":"2025-12-01T09:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.780861 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.780943 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.780952 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.780973 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.780986 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:42Z","lastTransitionTime":"2025-12-01T09:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.883897 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.883959 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.884011 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.884036 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.884051 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:42Z","lastTransitionTime":"2025-12-01T09:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.986850 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.986949 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.986967 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.986982 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:42 crc kubenswrapper[4933]: I1201 09:33:42.986993 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:42Z","lastTransitionTime":"2025-12-01T09:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.089554 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.089604 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.089615 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.089632 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.089643 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:43Z","lastTransitionTime":"2025-12-01T09:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.192283 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.192388 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.192409 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.192437 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.192459 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:43Z","lastTransitionTime":"2025-12-01T09:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.295301 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.295387 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.295396 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.295416 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.295434 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:43Z","lastTransitionTime":"2025-12-01T09:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.397822 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.397860 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.397872 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.397888 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.397897 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:43Z","lastTransitionTime":"2025-12-01T09:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.500824 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.500869 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.500879 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.500892 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.500902 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:43Z","lastTransitionTime":"2025-12-01T09:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.507044 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.507090 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.507103 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.507122 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.507136 4933 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T09:33:43Z","lastTransitionTime":"2025-12-01T09:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.564862 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-qvh8t" podStartSLOduration=95.564844682 podStartE2EDuration="1m35.564844682s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:33:40.014202062 +0000 UTC m=+110.655925677" watchObservedRunningTime="2025-12-01 09:33:43.564844682 +0000 UTC m=+114.206568297" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.565019 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-qrpr5"] Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.565417 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qrpr5" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.569007 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.569278 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.569330 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.570514 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.647049 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/42bc62ba-d8aa-4f93-9f93-29ac627aa84c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qrpr5\" (UID: \"42bc62ba-d8aa-4f93-9f93-29ac627aa84c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qrpr5" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.647140 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/42bc62ba-d8aa-4f93-9f93-29ac627aa84c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qrpr5\" (UID: \"42bc62ba-d8aa-4f93-9f93-29ac627aa84c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qrpr5" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.647365 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42bc62ba-d8aa-4f93-9f93-29ac627aa84c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qrpr5\" (UID: \"42bc62ba-d8aa-4f93-9f93-29ac627aa84c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qrpr5" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.647467 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/42bc62ba-d8aa-4f93-9f93-29ac627aa84c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qrpr5\" (UID: \"42bc62ba-d8aa-4f93-9f93-29ac627aa84c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qrpr5" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.647648 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42bc62ba-d8aa-4f93-9f93-29ac627aa84c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qrpr5\" (UID: \"42bc62ba-d8aa-4f93-9f93-29ac627aa84c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qrpr5" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.666784 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.666899 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:33:43 crc kubenswrapper[4933]: E1201 09:33:43.666993 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.667185 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:33:43 crc kubenswrapper[4933]: E1201 09:33:43.667253 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.667363 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:33:43 crc kubenswrapper[4933]: E1201 09:33:43.667700 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:33:43 crc kubenswrapper[4933]: E1201 09:33:43.667831 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.749116 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/42bc62ba-d8aa-4f93-9f93-29ac627aa84c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qrpr5\" (UID: \"42bc62ba-d8aa-4f93-9f93-29ac627aa84c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qrpr5" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.749264 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42bc62ba-d8aa-4f93-9f93-29ac627aa84c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qrpr5\" (UID: \"42bc62ba-d8aa-4f93-9f93-29ac627aa84c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qrpr5" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.749402 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/42bc62ba-d8aa-4f93-9f93-29ac627aa84c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qrpr5\" (UID: \"42bc62ba-d8aa-4f93-9f93-29ac627aa84c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qrpr5" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.749508 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42bc62ba-d8aa-4f93-9f93-29ac627aa84c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qrpr5\" (UID: \"42bc62ba-d8aa-4f93-9f93-29ac627aa84c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qrpr5" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.749569 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/42bc62ba-d8aa-4f93-9f93-29ac627aa84c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qrpr5\" (UID: \"42bc62ba-d8aa-4f93-9f93-29ac627aa84c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qrpr5" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.749576 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/42bc62ba-d8aa-4f93-9f93-29ac627aa84c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qrpr5\" (UID: \"42bc62ba-d8aa-4f93-9f93-29ac627aa84c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qrpr5" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.749676 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/42bc62ba-d8aa-4f93-9f93-29ac627aa84c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qrpr5\" (UID: \"42bc62ba-d8aa-4f93-9f93-29ac627aa84c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qrpr5" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.750297 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/42bc62ba-d8aa-4f93-9f93-29ac627aa84c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qrpr5\" (UID: \"42bc62ba-d8aa-4f93-9f93-29ac627aa84c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qrpr5" Dec 01 
09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.758279 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42bc62ba-d8aa-4f93-9f93-29ac627aa84c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qrpr5\" (UID: \"42bc62ba-d8aa-4f93-9f93-29ac627aa84c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qrpr5" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.771929 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42bc62ba-d8aa-4f93-9f93-29ac627aa84c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qrpr5\" (UID: \"42bc62ba-d8aa-4f93-9f93-29ac627aa84c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qrpr5" Dec 01 09:33:43 crc kubenswrapper[4933]: I1201 09:33:43.879736 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qrpr5" Dec 01 09:33:44 crc kubenswrapper[4933]: I1201 09:33:44.470437 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4fncv_f0c7b4b8-8e07-4bd4-b811-cdb373873e8a/kube-multus/1.log" Dec 01 09:33:44 crc kubenswrapper[4933]: I1201 09:33:44.472451 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4fncv_f0c7b4b8-8e07-4bd4-b811-cdb373873e8a/kube-multus/0.log" Dec 01 09:33:44 crc kubenswrapper[4933]: I1201 09:33:44.472514 4933 generic.go:334] "Generic (PLEG): container finished" podID="f0c7b4b8-8e07-4bd4-b811-cdb373873e8a" containerID="1ac251024105496fb2cd821720a3ad6e717ef9c6da03401d62a0d58a96dce58f" exitCode=1 Dec 01 09:33:44 crc kubenswrapper[4933]: I1201 09:33:44.472593 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4fncv" event={"ID":"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a","Type":"ContainerDied","Data":"1ac251024105496fb2cd821720a3ad6e717ef9c6da03401d62a0d58a96dce58f"} Dec 01 09:33:44 crc kubenswrapper[4933]: I1201 09:33:44.472640 4933 scope.go:117] "RemoveContainer" containerID="8b331ea40475f12defd95b557df2110fec92e02fae33979f30ed7f5a31d79255" Dec 01 09:33:44 crc kubenswrapper[4933]: I1201 09:33:44.473973 4933 scope.go:117] "RemoveContainer" containerID="1ac251024105496fb2cd821720a3ad6e717ef9c6da03401d62a0d58a96dce58f" Dec 01 09:33:44 crc kubenswrapper[4933]: E1201 09:33:44.474258 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-4fncv_openshift-multus(f0c7b4b8-8e07-4bd4-b811-cdb373873e8a)\"" pod="openshift-multus/multus-4fncv" podUID="f0c7b4b8-8e07-4bd4-b811-cdb373873e8a" Dec 01 09:33:44 crc kubenswrapper[4933]: I1201 09:33:44.475153 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qrpr5" event={"ID":"42bc62ba-d8aa-4f93-9f93-29ac627aa84c","Type":"ContainerStarted","Data":"5cda47285fa8dc80f6a5897cc4576683fdd9704777741e9acefa098c1100d9b8"} Dec 01 09:33:44 crc kubenswrapper[4933]: I1201 09:33:44.475214 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qrpr5" event={"ID":"42bc62ba-d8aa-4f93-9f93-29ac627aa84c","Type":"ContainerStarted","Data":"68df8cd284e9a39c9ba5a5dcf2eead79d41b3596719a1eb42bb48d1d4d69f1a6"} Dec 01 09:33:44 crc kubenswrapper[4933]: I1201 09:33:44.507107 4933 
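The "back-off 10s restarting failed container" entry above is the first step of CrashLoopBackOff: the restart delay doubles on each failure up to a cap. A sketch of that progression, assuming the usual kubelet defaults of a 10s base and a 5m maximum (illustrative, not kubelet source):

```go
package main

import (
	"fmt"
	"time"
)

// crashLoopDelay is a hypothetical helper mirroring the doubling pattern;
// base and maxDelay follow the commonly cited kubelet defaults.
func crashLoopDelay(restarts int, base, maxDelay time.Duration) time.Duration {
	d := base
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for r := 0; r <= 5; r++ {
		fmt.Printf("restart #%d -> back-off %v\n", r, crashLoopDelay(r, 10*time.Second, 5*time.Minute))
	}
	// prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s
}
```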
Dec 01 09:33:44 crc kubenswrapper[4933]: I1201 09:33:44.507107 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qrpr5" podStartSLOduration=96.507077018 podStartE2EDuration="1m36.507077018s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:33:44.506808512 +0000 UTC m=+115.148532127" watchObservedRunningTime="2025-12-01 09:33:44.507077018 +0000 UTC m=+115.148800633" Dec 01 09:33:45 crc kubenswrapper[4933]: I1201 09:33:45.480415 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4fncv_f0c7b4b8-8e07-4bd4-b811-cdb373873e8a/kube-multus/1.log"
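The podStartSLOduration figure in the entry above is the observed running time minus the pod creation timestamp. Checking the arithmetic for cluster-version-operator-5c965bbfc6-qrpr5, with the two timestamps rewritten in RFC 3339 form for time.Parse:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// podCreationTimestamp and observedRunningTime from the entry above
	created, _ := time.Parse(time.RFC3339, "2025-12-01T09:32:08Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-12-01T09:33:44.506808512Z")
	// prints 1m36.506808512s, within a fraction of a millisecond of the
	// 96.507077018s SLO duration logged above
	fmt.Println(running.Sub(created))
}
```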
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:33:47 crc kubenswrapper[4933]: I1201 09:33:47.666704 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:33:47 crc kubenswrapper[4933]: I1201 09:33:47.666695 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:33:47 crc kubenswrapper[4933]: I1201 09:33:47.666801 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:33:47 crc kubenswrapper[4933]: I1201 09:33:47.666917 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:33:47 crc kubenswrapper[4933]: E1201 09:33:47.667051 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:33:47 crc kubenswrapper[4933]: E1201 09:33:47.667271 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:33:47 crc kubenswrapper[4933]: E1201 09:33:47.667563 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:33:47 crc kubenswrapper[4933]: E1201 09:33:47.667560 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:33:49 crc kubenswrapper[4933]: E1201 09:33:49.587393 4933 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 01 09:33:49 crc kubenswrapper[4933]: I1201 09:33:49.667062 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:33:49 crc kubenswrapper[4933]: E1201 09:33:49.670121 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:33:49 crc kubenswrapper[4933]: I1201 09:33:49.670212 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:33:49 crc kubenswrapper[4933]: I1201 09:33:49.670239 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:33:49 crc kubenswrapper[4933]: E1201 09:33:49.670350 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:33:49 crc kubenswrapper[4933]: I1201 09:33:49.670404 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:33:49 crc kubenswrapper[4933]: E1201 09:33:49.670468 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:33:49 crc kubenswrapper[4933]: E1201 09:33:49.670668 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:33:49 crc kubenswrapper[4933]: E1201 09:33:49.953224 4933 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 01 09:33:51 crc kubenswrapper[4933]: I1201 09:33:51.667178 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:33:51 crc kubenswrapper[4933]: I1201 09:33:51.667251 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:33:51 crc kubenswrapper[4933]: E1201 09:33:51.667334 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:33:51 crc kubenswrapper[4933]: I1201 09:33:51.667484 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:33:51 crc kubenswrapper[4933]: E1201 09:33:51.667554 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:33:51 crc kubenswrapper[4933]: E1201 09:33:51.667730 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:33:51 crc kubenswrapper[4933]: I1201 09:33:51.668075 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:33:51 crc kubenswrapper[4933]: E1201 09:33:51.668170 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:33:51 crc kubenswrapper[4933]: I1201 09:33:51.668331 4933 scope.go:117] "RemoveContainer" containerID="74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d" Dec 01 09:33:52 crc kubenswrapper[4933]: I1201 09:33:52.460004 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bcqz5"] Dec 01 09:33:52 crc kubenswrapper[4933]: I1201 09:33:52.507161 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zccpd_d49bee31-b7e9-4daa-986f-b6f58c663813/ovnkube-controller/3.log" Dec 01 09:33:52 crc kubenswrapper[4933]: I1201 09:33:52.509683 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerStarted","Data":"e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0"} Dec 01 09:33:52 crc kubenswrapper[4933]: I1201 09:33:52.509737 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:33:52 crc kubenswrapper[4933]: E1201 09:33:52.509873 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:33:52 crc kubenswrapper[4933]: I1201 09:33:52.510170 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:33:53 crc kubenswrapper[4933]: I1201 09:33:53.667163 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:33:53 crc kubenswrapper[4933]: E1201 09:33:53.667285 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:33:53 crc kubenswrapper[4933]: I1201 09:33:53.667370 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:33:53 crc kubenswrapper[4933]: E1201 09:33:53.667556 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:33:53 crc kubenswrapper[4933]: I1201 09:33:53.667868 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:33:53 crc kubenswrapper[4933]: E1201 09:33:53.668132 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:33:54 crc kubenswrapper[4933]: I1201 09:33:54.666706 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:33:54 crc kubenswrapper[4933]: E1201 09:33:54.666870 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:33:54 crc kubenswrapper[4933]: E1201 09:33:54.954197 4933 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 01 09:33:55 crc kubenswrapper[4933]: I1201 09:33:55.666744 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:33:55 crc kubenswrapper[4933]: I1201 09:33:55.666802 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:33:55 crc kubenswrapper[4933]: I1201 09:33:55.666743 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:33:55 crc kubenswrapper[4933]: E1201 09:33:55.666923 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:33:55 crc kubenswrapper[4933]: E1201 09:33:55.667030 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:33:55 crc kubenswrapper[4933]: E1201 09:33:55.667088 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:33:56 crc kubenswrapper[4933]: I1201 09:33:56.666488 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:33:56 crc kubenswrapper[4933]: E1201 09:33:56.667055 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:33:57 crc kubenswrapper[4933]: I1201 09:33:57.667865 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:33:57 crc kubenswrapper[4933]: I1201 09:33:57.667917 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:33:57 crc kubenswrapper[4933]: E1201 09:33:57.668014 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:33:57 crc kubenswrapper[4933]: I1201 09:33:57.668073 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:33:57 crc kubenswrapper[4933]: E1201 09:33:57.668339 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:33:57 crc kubenswrapper[4933]: I1201 09:33:57.668691 4933 scope.go:117] "RemoveContainer" containerID="1ac251024105496fb2cd821720a3ad6e717ef9c6da03401d62a0d58a96dce58f" Dec 01 09:33:57 crc kubenswrapper[4933]: E1201 09:33:57.668747 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:33:57 crc kubenswrapper[4933]: I1201 09:33:57.694757 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" podStartSLOduration=109.694736078 podStartE2EDuration="1m49.694736078s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:33:52.539337987 +0000 UTC m=+123.181061602" watchObservedRunningTime="2025-12-01 09:33:57.694736078 +0000 UTC m=+128.336459693" Dec 01 09:33:58 crc kubenswrapper[4933]: I1201 09:33:58.063604 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:33:58 crc kubenswrapper[4933]: I1201 09:33:58.536359 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4fncv_f0c7b4b8-8e07-4bd4-b811-cdb373873e8a/kube-multus/1.log" Dec 01 09:33:58 crc kubenswrapper[4933]: I1201 09:33:58.536445 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4fncv" event={"ID":"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a","Type":"ContainerStarted","Data":"fa67bf1207226c8b6f7a005f4e479007b6cf584107103b695e65b9c6c160fbed"} Dec 01 09:33:58 crc kubenswrapper[4933]: I1201 09:33:58.666839 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:33:58 crc kubenswrapper[4933]: E1201 09:33:58.667086 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bcqz5" podUID="9e67470a-b3fe-4176-b546-fdf28012fce5" Dec 01 09:33:59 crc kubenswrapper[4933]: I1201 09:33:59.666868 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:33:59 crc kubenswrapper[4933]: I1201 09:33:59.666906 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:33:59 crc kubenswrapper[4933]: I1201 09:33:59.666868 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:33:59 crc kubenswrapper[4933]: E1201 09:33:59.667984 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 09:33:59 crc kubenswrapper[4933]: E1201 09:33:59.668071 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 09:33:59 crc kubenswrapper[4933]: E1201 09:33:59.668119 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 09:34:00 crc kubenswrapper[4933]: I1201 09:34:00.667092 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:34:00 crc kubenswrapper[4933]: I1201 09:34:00.671030 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 01 09:34:00 crc kubenswrapper[4933]: I1201 09:34:00.671058 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 01 09:34:01 crc kubenswrapper[4933]: I1201 09:34:01.667600 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:34:01 crc kubenswrapper[4933]: I1201 09:34:01.667702 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:34:01 crc kubenswrapper[4933]: I1201 09:34:01.667604 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:34:01 crc kubenswrapper[4933]: I1201 09:34:01.670513 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 01 09:34:01 crc kubenswrapper[4933]: I1201 09:34:01.671400 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 01 09:34:01 crc kubenswrapper[4933]: I1201 09:34:01.671402 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 01 09:34:01 crc kubenswrapper[4933]: I1201 09:34:01.671998 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.074717 4933 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.121418 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v9qqn"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.122161 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.125269 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.125929 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.126648 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.127111 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.128051 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.128230 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.128332 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.128483 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.128766 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.128995 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.129929 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hpmth"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.130552 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-hpmth" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.131579 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-dtcjv"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.138640 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.142151 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-qbr9b"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.144324 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qbr9b" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.166682 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.169359 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4wqht"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.169876 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.172792 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.172993 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.178633 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.178711 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.179002 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.179016 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.179091 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.179134 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.179229 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.179350 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.179452 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.179553 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.179575 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.179690 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.179738 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.179454 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.179814 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.179149 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.179909 4933 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"serving-cert" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.179999 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.180050 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.180088 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.180185 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.180206 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.180922 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nwhhr"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.181344 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nwhhr" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.188611 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.192869 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.196365 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.196467 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.196506 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.196553 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.196693 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.196822 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.199057 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.200978 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.201398 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.201533 4933 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"client-ca" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.201739 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.202133 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.202248 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.203670 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.203982 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8pjf2"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.204470 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8pjf2" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.204938 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-x74qn"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.204999 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.205172 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.210448 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.211981 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.212116 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.212166 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.212660 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-sh7dc"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.213027 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.213170 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-sncl8"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.213407 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sh7dc" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.213568 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.214105 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tg9q9"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.214209 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x74qn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.223508 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-sncl8" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.226600 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.227246 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.228518 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.246093 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-v4fq8"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.246430 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsjj4"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.246599 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.246744 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsjj4" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.246998 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-v4fq8" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.248123 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.248233 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.248488 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.248559 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.248639 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.248645 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.248499 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.248850 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.248958 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tg9q9" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.248970 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.249002 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.249026 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.249122 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.249160 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.249465 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.249187 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.249776 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xdrhr"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.250044 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.250187 4933 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.250348 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xdrhr" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.250445 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.255179 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.255740 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.255947 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.256150 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.256321 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.257006 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.257127 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.258763 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.258954 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.259915 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.263053 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vwm48"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.264944 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.265130 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.265295 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.265451 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.265618 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.265777 4933 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.265919 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.266031 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.266183 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.267983 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.269622 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzxkd"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.270035 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htq9g"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.270050 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.270538 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htq9g" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.270855 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.270992 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vwm48" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.271251 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzxkd" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.272116 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.272274 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sfb5s"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.272529 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.273119 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.273184 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sfb5s" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.277205 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.277370 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.277539 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dnfn7"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.278288 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dnfn7" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.280752 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-wnvhn"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.281892 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6lzfq"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.282246 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rt422"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.282655 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rt422" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.282940 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-wnvhn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.283059 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.283094 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lzfq" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.283123 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh2bz\" (UniqueName: \"kubernetes.io/projected/b805d945-8eed-48d3-9547-560266e5dfb1-kube-api-access-wh2bz\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.283146 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39d14064-58a6-4a37-9a8f-2e3fdf93c46a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6qdvh\" (UID: \"39d14064-58a6-4a37-9a8f-2e3fdf93c46a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.283163 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2020a25e-c390-4919-8f4f-3472caca4c14-trusted-ca-bundle\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.283226 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjpbt\" (UniqueName: \"kubernetes.io/projected/54960e89-4e49-4c21-bea4-cc46fcf8edba-kube-api-access-tjpbt\") pod \"route-controller-manager-6576b87f9c-sr8zs\" (UID: \"54960e89-4e49-4c21-bea4-cc46fcf8edba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.283252 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46ceb35e-f316-481d-acdc-c61617f13e5f-service-ca-bundle\") pod \"authentication-operator-69f744f599-hpmth\" (UID: \"46ceb35e-f316-481d-acdc-c61617f13e5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpmth" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.283272 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39d14064-58a6-4a37-9a8f-2e3fdf93c46a-serving-cert\") pod \"apiserver-7bbb656c7d-6qdvh\" (UID: \"39d14064-58a6-4a37-9a8f-2e3fdf93c46a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.283289 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6-trusted-ca\") pod \"console-operator-58897d9998-nwhhr\" (UID: \"1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6\") " pod="openshift-console-operator/console-operator-58897d9998-nwhhr" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.283325 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2020a25e-c390-4919-8f4f-3472caca4c14-audit\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " 
pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.283345 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b805d945-8eed-48d3-9547-560266e5dfb1-audit-policies\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.283364 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.283381 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/39d14064-58a6-4a37-9a8f-2e3fdf93c46a-etcd-client\") pod \"apiserver-7bbb656c7d-6qdvh\" (UID: \"39d14064-58a6-4a37-9a8f-2e3fdf93c46a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.283396 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39d14064-58a6-4a37-9a8f-2e3fdf93c46a-audit-dir\") pod \"apiserver-7bbb656c7d-6qdvh\" (UID: \"39d14064-58a6-4a37-9a8f-2e3fdf93c46a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.283412 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54960e89-4e49-4c21-bea4-cc46fcf8edba-config\") pod \"route-controller-manager-6576b87f9c-sr8zs\" (UID: \"54960e89-4e49-4c21-bea4-cc46fcf8edba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.283431 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2020a25e-c390-4919-8f4f-3472caca4c14-config\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.283448 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2020a25e-c390-4919-8f4f-3472caca4c14-etcd-client\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.283466 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b805d945-8eed-48d3-9547-560266e5dfb1-audit-dir\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.289641 4933 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.289693 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77d64c85-edec-42f6-9c3e-7bbbf04cc84e-serving-cert\") pod \"openshift-config-operator-7777fb866f-sh7dc\" (UID: \"77d64c85-edec-42f6-9c3e-7bbbf04cc84e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sh7dc" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.289735 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da8f0888-39cd-4813-8f5b-ba725fb15ee5-serving-cert\") pod \"controller-manager-879f6c89f-v9qqn\" (UID: \"da8f0888-39cd-4813-8f5b-ba725fb15ee5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.289963 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2020a25e-c390-4919-8f4f-3472caca4c14-image-import-ca\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.290022 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v9qqn"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.290214 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2020a25e-c390-4919-8f4f-3472caca4c14-audit-dir\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.290325 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d1dbe4f-c837-4206-aa33-8ad657a3f4e5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8pjf2\" (UID: \"5d1dbe4f-c837-4206-aa33-8ad657a3f4e5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8pjf2" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.290743 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nwhhr"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.292778 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46ceb35e-f316-481d-acdc-c61617f13e5f-config\") pod \"authentication-operator-69f744f599-hpmth\" (UID: \"46ceb35e-f316-481d-acdc-c61617f13e5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpmth" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.292850 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/2020a25e-c390-4919-8f4f-3472caca4c14-node-pullsecrets\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.292904 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2020a25e-c390-4919-8f4f-3472caca4c14-serving-cert\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.292971 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh7t7\" (UniqueName: \"kubernetes.io/projected/45bbe65f-8e73-4b73-863c-15db667e3e22-kube-api-access-lh7t7\") pod \"console-f9d7485db-x74qn\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") " pod="openshift-console/console-f9d7485db-x74qn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.293052 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/45bbe65f-8e73-4b73-863c-15db667e3e22-service-ca\") pod \"console-f9d7485db-x74qn\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") " pod="openshift-console/console-f9d7485db-x74qn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.293134 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mckl6\" (UniqueName: \"kubernetes.io/projected/5d1dbe4f-c837-4206-aa33-8ad657a3f4e5-kube-api-access-mckl6\") pod \"openshift-apiserver-operator-796bbdcf4f-8pjf2\" (UID: \"5d1dbe4f-c837-4206-aa33-8ad657a3f4e5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8pjf2" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.293247 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.293324 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvzlq\" (UniqueName: \"kubernetes.io/projected/77d64c85-edec-42f6-9c3e-7bbbf04cc84e-kube-api-access-lvzlq\") pod \"openshift-config-operator-7777fb866f-sh7dc\" (UID: \"77d64c85-edec-42f6-9c3e-7bbbf04cc84e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sh7dc" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.293354 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6-config\") pod \"console-operator-58897d9998-nwhhr\" (UID: \"1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6\") " pod="openshift-console-operator/console-operator-58897d9998-nwhhr" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.293405 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf8pl\" (UniqueName: \"kubernetes.io/projected/2020a25e-c390-4919-8f4f-3472caca4c14-kube-api-access-gf8pl\") 
pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.293440 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.293470 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.294091 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-znqzs"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.294087 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/45bbe65f-8e73-4b73-863c-15db667e3e22-console-oauth-config\") pod \"console-f9d7485db-x74qn\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") " pod="openshift-console/console-f9d7485db-x74qn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.294679 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.294745 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.294786 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45bbe65f-8e73-4b73-863c-15db667e3e22-trusted-ca-bundle\") pod \"console-f9d7485db-x74qn\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") " pod="openshift-console/console-f9d7485db-x74qn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.294861 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d55vr\" (UniqueName: \"kubernetes.io/projected/39d14064-58a6-4a37-9a8f-2e3fdf93c46a-kube-api-access-d55vr\") pod \"apiserver-7bbb656c7d-6qdvh\" (UID: \"39d14064-58a6-4a37-9a8f-2e3fdf93c46a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.294897 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54960e89-4e49-4c21-bea4-cc46fcf8edba-serving-cert\") pod \"route-controller-manager-6576b87f9c-sr8zs\" (UID: \"54960e89-4e49-4c21-bea4-cc46fcf8edba\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.294940 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.294963 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/39d14064-58a6-4a37-9a8f-2e3fdf93c46a-audit-policies\") pod \"apiserver-7bbb656c7d-6qdvh\" (UID: \"39d14064-58a6-4a37-9a8f-2e3fdf93c46a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.295002 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.295041 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/77d64c85-edec-42f6-9c3e-7bbbf04cc84e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-sh7dc\" (UID: \"77d64c85-edec-42f6-9c3e-7bbbf04cc84e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sh7dc" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.295085 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/45bbe65f-8e73-4b73-863c-15db667e3e22-console-serving-cert\") pod \"console-f9d7485db-x74qn\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") " pod="openshift-console/console-f9d7485db-x74qn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.295112 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a594f887-35b0-4757-9522-e22b68536bca-machine-approver-tls\") pod \"machine-approver-56656f9798-qbr9b\" (UID: \"a594f887-35b0-4757-9522-e22b68536bca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qbr9b" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.295163 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjm58\" (UniqueName: \"kubernetes.io/projected/1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6-kube-api-access-hjm58\") pod \"console-operator-58897d9998-nwhhr\" (UID: \"1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6\") " pod="openshift-console-operator/console-operator-58897d9998-nwhhr" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.295198 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2020a25e-c390-4919-8f4f-3472caca4c14-encryption-config\") pod \"apiserver-76f77b778f-dtcjv\" (UID: 
\"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.295225 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46ceb35e-f316-481d-acdc-c61617f13e5f-serving-cert\") pod \"authentication-operator-69f744f599-hpmth\" (UID: \"46ceb35e-f316-481d-acdc-c61617f13e5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpmth" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.295256 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/39d14064-58a6-4a37-9a8f-2e3fdf93c46a-encryption-config\") pod \"apiserver-7bbb656c7d-6qdvh\" (UID: \"39d14064-58a6-4a37-9a8f-2e3fdf93c46a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.295344 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6-serving-cert\") pod \"console-operator-58897d9998-nwhhr\" (UID: \"1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6\") " pod="openshift-console-operator/console-operator-58897d9998-nwhhr" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.295380 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/45bbe65f-8e73-4b73-863c-15db667e3e22-console-config\") pod \"console-f9d7485db-x74qn\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") " pod="openshift-console/console-f9d7485db-x74qn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.295421 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da8f0888-39cd-4813-8f5b-ba725fb15ee5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-v9qqn\" (UID: \"da8f0888-39cd-4813-8f5b-ba725fb15ee5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.295465 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/45bbe65f-8e73-4b73-863c-15db667e3e22-oauth-serving-cert\") pod \"console-f9d7485db-x74qn\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") " pod="openshift-console/console-f9d7485db-x74qn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.295500 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da8f0888-39cd-4813-8f5b-ba725fb15ee5-config\") pod \"controller-manager-879f6c89f-v9qqn\" (UID: \"da8f0888-39cd-4813-8f5b-ba725fb15ee5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.295523 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da8f0888-39cd-4813-8f5b-ba725fb15ee5-client-ca\") pod \"controller-manager-879f6c89f-v9qqn\" (UID: \"da8f0888-39cd-4813-8f5b-ba725fb15ee5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn" Dec 01 09:34:04 crc kubenswrapper[4933]: 
I1201 09:34:04.295553 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxqcs\" (UniqueName: \"kubernetes.io/projected/46ceb35e-f316-481d-acdc-c61617f13e5f-kube-api-access-sxqcs\") pod \"authentication-operator-69f744f599-hpmth\" (UID: \"46ceb35e-f316-481d-acdc-c61617f13e5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpmth" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.295576 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4z57\" (UniqueName: \"kubernetes.io/projected/a594f887-35b0-4757-9522-e22b68536bca-kube-api-access-l4z57\") pod \"machine-approver-56656f9798-qbr9b\" (UID: \"a594f887-35b0-4757-9522-e22b68536bca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qbr9b" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.295616 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a594f887-35b0-4757-9522-e22b68536bca-auth-proxy-config\") pod \"machine-approver-56656f9798-qbr9b\" (UID: \"a594f887-35b0-4757-9522-e22b68536bca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qbr9b" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.295647 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d1dbe4f-c837-4206-aa33-8ad657a3f4e5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8pjf2\" (UID: \"5d1dbe4f-c837-4206-aa33-8ad657a3f4e5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8pjf2" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.295696 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmk2b\" (UniqueName: \"kubernetes.io/projected/da8f0888-39cd-4813-8f5b-ba725fb15ee5-kube-api-access-kmk2b\") pod \"controller-manager-879f6c89f-v9qqn\" (UID: \"da8f0888-39cd-4813-8f5b-ba725fb15ee5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.295715 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.295736 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46ceb35e-f316-481d-acdc-c61617f13e5f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hpmth\" (UID: \"46ceb35e-f316-481d-acdc-c61617f13e5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpmth" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.295763 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54960e89-4e49-4c21-bea4-cc46fcf8edba-client-ca\") pod \"route-controller-manager-6576b87f9c-sr8zs\" (UID: \"54960e89-4e49-4c21-bea4-cc46fcf8edba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.295794 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.295818 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2020a25e-c390-4919-8f4f-3472caca4c14-etcd-serving-ca\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.297648 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/39d14064-58a6-4a37-9a8f-2e3fdf93c46a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6qdvh\" (UID: \"39d14064-58a6-4a37-9a8f-2e3fdf93c46a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.297912 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.297981 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a594f887-35b0-4757-9522-e22b68536bca-config\") pod \"machine-approver-56656f9798-qbr9b\" (UID: \"a594f887-35b0-4757-9522-e22b68536bca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qbr9b" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.299029 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-dtcjv"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.303923 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh"] Dec 01 09:34:04 crc 
kubenswrapper[4933]: I1201 09:34:04.309676 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.309905 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-g9j6m"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.310824 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zj2bn"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.311213 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-g9j6m" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.311784 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zj2bn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.312016 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4mnm"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.312968 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7jpf"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.313614 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7jpf" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.313778 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n42jc"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.313841 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4mnm" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.314931 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409690-jvl8l"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.315469 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n42jc" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.316424 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-sh7dc"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.316509 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-jvl8l" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.317106 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n22nk"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.317845 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n22nk" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.322555 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2clwd"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.323458 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-2clwd" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.324617 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-lmgsj"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.324835 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.325202 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x82fl"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.326569 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-lmgsj" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.326607 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x82fl" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.327366 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-84fbz"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.328253 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-84fbz" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.336108 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6dwg7"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.336770 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6dwg7" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.339787 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vwm48"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.341774 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzxkd"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.342969 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-x74qn"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.343805 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xdrhr"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.344875 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.346150 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsjj4"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.347519 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tg9q9"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.348332 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mjkjp"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.349237 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mjkjp" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.350061 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vbvt7"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.350611 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vbvt7" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.351225 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4wqht"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.352365 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dnfn7"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.353414 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-v4fq8"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.354783 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4mnm"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.356464 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n42jc"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.357837 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-sncl8"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.358839 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2clwd"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.359822 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hpmth"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.361221 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n22nk"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.362121 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lmgsj"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.363494 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rt422"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.369316 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.376265 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sfb5s"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.376360 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htq9g"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.391723 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.392043 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mjkjp"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.392098 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8pjf2"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.392108 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zj2bn"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.398373 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-q5ch5"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.399940 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-q5ch5" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.400862 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2020a25e-c390-4919-8f4f-3472caca4c14-encryption-config\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.400908 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/91787165-ff9b-4d6c-8d81-a6efc4bdb19a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bsjj4\" (UID: \"91787165-ff9b-4d6c-8d81-a6efc4bdb19a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsjj4" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.400938 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/39d14064-58a6-4a37-9a8f-2e3fdf93c46a-encryption-config\") pod \"apiserver-7bbb656c7d-6qdvh\" (UID: \"39d14064-58a6-4a37-9a8f-2e3fdf93c46a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.400962 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/797e5656-fa46-48f0-a336-40560b3da3a5-metrics-tls\") pod \"dns-default-lmgsj\" (UID: \"797e5656-fa46-48f0-a336-40560b3da3a5\") " pod="openshift-dns/dns-default-lmgsj" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.400996 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/29cdc67d-6d2a-44b2-bd31-3634aff7f52e-images\") pod \"machine-api-operator-5694c8668f-xdrhr\" (UID: \"29cdc67d-6d2a-44b2-bd31-3634aff7f52e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xdrhr" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401021 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p2sk\" (UniqueName: \"kubernetes.io/projected/9074f73d-a336-4a52-960b-b18e219d12a5-kube-api-access-6p2sk\") pod \"package-server-manager-789f6589d5-x82fl\" (UID: \"9074f73d-a336-4a52-960b-b18e219d12a5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x82fl" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401052 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" 
(UniqueName: \"kubernetes.io/configmap/45bbe65f-8e73-4b73-863c-15db667e3e22-oauth-serving-cert\") pod \"console-f9d7485db-x74qn\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") " pod="openshift-console/console-f9d7485db-x74qn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401075 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da8f0888-39cd-4813-8f5b-ba725fb15ee5-client-ca\") pod \"controller-manager-879f6c89f-v9qqn\" (UID: \"da8f0888-39cd-4813-8f5b-ba725fb15ee5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401098 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4z57\" (UniqueName: \"kubernetes.io/projected/a594f887-35b0-4757-9522-e22b68536bca-kube-api-access-l4z57\") pod \"machine-approver-56656f9798-qbr9b\" (UID: \"a594f887-35b0-4757-9522-e22b68536bca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qbr9b" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401123 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e69273f1-a871-402d-bb09-2150ee1134b1-config\") pod \"kube-apiserver-operator-766d6c64bb-htq9g\" (UID: \"e69273f1-a871-402d-bb09-2150ee1134b1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htq9g" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401151 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a594f887-35b0-4757-9522-e22b68536bca-auth-proxy-config\") pod \"machine-approver-56656f9798-qbr9b\" (UID: \"a594f887-35b0-4757-9522-e22b68536bca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qbr9b" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401176 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9vqg\" (UniqueName: \"kubernetes.io/projected/78d9587b-0bca-4439-8518-f652be926d70-kube-api-access-t9vqg\") pod \"multus-admission-controller-857f4d67dd-g9j6m\" (UID: \"78d9587b-0bca-4439-8518-f652be926d70\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g9j6m" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401199 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjp6t\" (UniqueName: \"kubernetes.io/projected/e360f8b0-0f1b-4a9b-9aed-cd0a8976482a-kube-api-access-gjp6t\") pod \"router-default-5444994796-wnvhn\" (UID: \"e360f8b0-0f1b-4a9b-9aed-cd0a8976482a\") " pod="openshift-ingress/router-default-5444994796-wnvhn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401221 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp22d\" (UniqueName: \"kubernetes.io/projected/c121048e-9df5-412a-9d86-e7cf8a59d0e1-kube-api-access-tp22d\") pod \"downloads-7954f5f757-v4fq8\" (UID: \"c121048e-9df5-412a-9d86-e7cf8a59d0e1\") " pod="openshift-console/downloads-7954f5f757-v4fq8" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401243 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d1dbe4f-c837-4206-aa33-8ad657a3f4e5-config\") pod 
\"openshift-apiserver-operator-796bbdcf4f-8pjf2\" (UID: \"5d1dbe4f-c837-4206-aa33-8ad657a3f4e5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8pjf2" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401265 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91787165-ff9b-4d6c-8d81-a6efc4bdb19a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bsjj4\" (UID: \"91787165-ff9b-4d6c-8d81-a6efc4bdb19a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsjj4" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401290 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54960e89-4e49-4c21-bea4-cc46fcf8edba-client-ca\") pod \"route-controller-manager-6576b87f9c-sr8zs\" (UID: \"54960e89-4e49-4c21-bea4-cc46fcf8edba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401353 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401395 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/39d14064-58a6-4a37-9a8f-2e3fdf93c46a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6qdvh\" (UID: \"39d14064-58a6-4a37-9a8f-2e3fdf93c46a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401422 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401445 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a594f887-35b0-4757-9522-e22b68536bca-config\") pod \"machine-approver-56656f9798-qbr9b\" (UID: \"a594f887-35b0-4757-9522-e22b68536bca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qbr9b" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401470 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e69273f1-a871-402d-bb09-2150ee1134b1-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-htq9g\" (UID: \"e69273f1-a871-402d-bb09-2150ee1134b1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htq9g" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401496 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e360f8b0-0f1b-4a9b-9aed-cd0a8976482a-stats-auth\") pod \"router-default-5444994796-wnvhn\" (UID: 
\"e360f8b0-0f1b-4a9b-9aed-cd0a8976482a\") " pod="openshift-ingress/router-default-5444994796-wnvhn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401519 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/375b62a4-7c53-4d18-8bf9-f9378321a8de-metrics-tls\") pod \"ingress-operator-5b745b69d9-dnfn7\" (UID: \"375b62a4-7c53-4d18-8bf9-f9378321a8de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dnfn7" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401545 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39d14064-58a6-4a37-9a8f-2e3fdf93c46a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6qdvh\" (UID: \"39d14064-58a6-4a37-9a8f-2e3fdf93c46a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401568 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2020a25e-c390-4919-8f4f-3472caca4c14-trusted-ca-bundle\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401593 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9b3e0393-fe92-4ca0-b9ce-85bceddbfad4-tmpfs\") pod \"packageserver-d55dfcdfc-n22nk\" (UID: \"9b3e0393-fe92-4ca0-b9ce-85bceddbfad4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n22nk" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401622 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46ceb35e-f316-481d-acdc-c61617f13e5f-service-ca-bundle\") pod \"authentication-operator-69f744f599-hpmth\" (UID: \"46ceb35e-f316-481d-acdc-c61617f13e5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpmth" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401645 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6-trusted-ca\") pod \"console-operator-58897d9998-nwhhr\" (UID: \"1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6\") " pod="openshift-console-operator/console-operator-58897d9998-nwhhr" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401670 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd22a22c-e9a1-4ca8-991c-100337423ece-serving-cert\") pod \"etcd-operator-b45778765-vwm48\" (UID: \"fd22a22c-e9a1-4ca8-991c-100337423ece\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vwm48" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401692 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmxxz\" (UniqueName: \"kubernetes.io/projected/c78b2b58-b81d-4a67-b879-9812138fdd29-kube-api-access-vmxxz\") pod \"olm-operator-6b444d44fb-x7jpf\" (UID: \"c78b2b58-b81d-4a67-b879-9812138fdd29\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7jpf" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401713 4933 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9b3e0393-fe92-4ca0-b9ce-85bceddbfad4-webhook-cert\") pod \"packageserver-d55dfcdfc-n22nk\" (UID: \"9b3e0393-fe92-4ca0-b9ce-85bceddbfad4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n22nk" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401739 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b805d945-8eed-48d3-9547-560266e5dfb1-audit-policies\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401760 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401783 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fd22a22c-e9a1-4ca8-991c-100337423ece-etcd-ca\") pod \"etcd-operator-b45778765-vwm48\" (UID: \"fd22a22c-e9a1-4ca8-991c-100337423ece\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vwm48" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401806 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54960e89-4e49-4c21-bea4-cc46fcf8edba-config\") pod \"route-controller-manager-6576b87f9c-sr8zs\" (UID: \"54960e89-4e49-4c21-bea4-cc46fcf8edba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401829 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2020a25e-c390-4919-8f4f-3472caca4c14-config\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401852 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77d64c85-edec-42f6-9c3e-7bbbf04cc84e-serving-cert\") pod \"openshift-config-operator-7777fb866f-sh7dc\" (UID: \"77d64c85-edec-42f6-9c3e-7bbbf04cc84e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sh7dc" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401872 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da8f0888-39cd-4813-8f5b-ba725fb15ee5-serving-cert\") pod \"controller-manager-879f6c89f-v9qqn\" (UID: \"da8f0888-39cd-4813-8f5b-ba725fb15ee5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401894 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c78b2b58-b81d-4a67-b879-9812138fdd29-srv-cert\") pod \"olm-operator-6b444d44fb-x7jpf\" 
(UID: \"c78b2b58-b81d-4a67-b879-9812138fdd29\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7jpf" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401915 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/91787165-ff9b-4d6c-8d81-a6efc4bdb19a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bsjj4\" (UID: \"91787165-ff9b-4d6c-8d81-a6efc4bdb19a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsjj4" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401937 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh7t7\" (UniqueName: \"kubernetes.io/projected/45bbe65f-8e73-4b73-863c-15db667e3e22-kube-api-access-lh7t7\") pod \"console-f9d7485db-x74qn\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") " pod="openshift-console/console-f9d7485db-x74qn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401960 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/45bbe65f-8e73-4b73-863c-15db667e3e22-service-ca\") pod \"console-f9d7485db-x74qn\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") " pod="openshift-console/console-f9d7485db-x74qn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.401980 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd22a22c-e9a1-4ca8-991c-100337423ece-config\") pod \"etcd-operator-b45778765-vwm48\" (UID: \"fd22a22c-e9a1-4ca8-991c-100337423ece\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vwm48" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.402003 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.402024 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/325978fb-e819-4d17-af79-821ee41da615-srv-cert\") pod \"catalog-operator-68c6474976-p4mnm\" (UID: \"325978fb-e819-4d17-af79-821ee41da615\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4mnm" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.402050 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/45bbe65f-8e73-4b73-863c-15db667e3e22-console-oauth-config\") pod \"console-f9d7485db-x74qn\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") " pod="openshift-console/console-f9d7485db-x74qn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.402072 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvsp5\" (UniqueName: \"kubernetes.io/projected/325978fb-e819-4d17-af79-821ee41da615-kube-api-access-cvsp5\") pod \"catalog-operator-68c6474976-p4mnm\" (UID: \"325978fb-e819-4d17-af79-821ee41da615\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4mnm" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 
09:34:04.402107 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.402132 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e360f8b0-0f1b-4a9b-9aed-cd0a8976482a-service-ca-bundle\") pod \"router-default-5444994796-wnvhn\" (UID: \"e360f8b0-0f1b-4a9b-9aed-cd0a8976482a\") " pod="openshift-ingress/router-default-5444994796-wnvhn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.402155 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54960e89-4e49-4c21-bea4-cc46fcf8edba-serving-cert\") pod \"route-controller-manager-6576b87f9c-sr8zs\" (UID: \"54960e89-4e49-4c21-bea4-cc46fcf8edba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.402178 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9b3e0393-fe92-4ca0-b9ce-85bceddbfad4-apiservice-cert\") pod \"packageserver-d55dfcdfc-n22nk\" (UID: \"9b3e0393-fe92-4ca0-b9ce-85bceddbfad4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n22nk" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.402203 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c78b2b58-b81d-4a67-b879-9812138fdd29-profile-collector-cert\") pod \"olm-operator-6b444d44fb-x7jpf\" (UID: \"c78b2b58-b81d-4a67-b879-9812138fdd29\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7jpf" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.402228 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/375b62a4-7c53-4d18-8bf9-f9378321a8de-trusted-ca\") pod \"ingress-operator-5b745b69d9-dnfn7\" (UID: \"375b62a4-7c53-4d18-8bf9-f9378321a8de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dnfn7" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.402254 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/77d64c85-edec-42f6-9c3e-7bbbf04cc84e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-sh7dc\" (UID: \"77d64c85-edec-42f6-9c3e-7bbbf04cc84e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sh7dc" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.402283 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/45bbe65f-8e73-4b73-863c-15db667e3e22-console-serving-cert\") pod \"console-f9d7485db-x74qn\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") " pod="openshift-console/console-f9d7485db-x74qn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.406508 4933 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46ceb35e-f316-481d-acdc-c61617f13e5f-service-ca-bundle\") pod \"authentication-operator-69f744f599-hpmth\" (UID: \"46ceb35e-f316-481d-acdc-c61617f13e5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpmth" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.408646 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54960e89-4e49-4c21-bea4-cc46fcf8edba-client-ca\") pod \"route-controller-manager-6576b87f9c-sr8zs\" (UID: \"54960e89-4e49-4c21-bea4-cc46fcf8edba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.410914 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.414362 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a594f887-35b0-4757-9522-e22b68536bca-machine-approver-tls\") pod \"machine-approver-56656f9798-qbr9b\" (UID: \"a594f887-35b0-4757-9522-e22b68536bca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qbr9b" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.428338 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-g9j6m"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.430250 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/39d14064-58a6-4a37-9a8f-2e3fdf93c46a-encryption-config\") pod \"apiserver-7bbb656c7d-6qdvh\" (UID: \"39d14064-58a6-4a37-9a8f-2e3fdf93c46a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.430268 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2020a25e-c390-4919-8f4f-3472caca4c14-encryption-config\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.432668 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vbvt7"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.434891 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.441960 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/45bbe65f-8e73-4b73-863c-15db667e3e22-oauth-serving-cert\") pod \"console-f9d7485db-x74qn\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") " pod="openshift-console/console-f9d7485db-x74qn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.444205 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-84fbz"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.445658 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/39d14064-58a6-4a37-9a8f-2e3fdf93c46a-etcd-serving-ca\") pod 
\"apiserver-7bbb656c7d-6qdvh\" (UID: \"39d14064-58a6-4a37-9a8f-2e3fdf93c46a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.445977 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a594f887-35b0-4757-9522-e22b68536bca-config\") pod \"machine-approver-56656f9798-qbr9b\" (UID: \"a594f887-35b0-4757-9522-e22b68536bca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qbr9b" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.446673 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.446704 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39d14064-58a6-4a37-9a8f-2e3fdf93c46a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6qdvh\" (UID: \"39d14064-58a6-4a37-9a8f-2e3fdf93c46a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.446838 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a594f887-35b0-4757-9522-e22b68536bca-auth-proxy-config\") pod \"machine-approver-56656f9798-qbr9b\" (UID: \"a594f887-35b0-4757-9522-e22b68536bca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qbr9b" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.446875 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.447528 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da8f0888-39cd-4813-8f5b-ba725fb15ee5-client-ca\") pod \"controller-manager-879f6c89f-v9qqn\" (UID: \"da8f0888-39cd-4813-8f5b-ba725fb15ee5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.448263 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d1dbe4f-c837-4206-aa33-8ad657a3f4e5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8pjf2\" (UID: \"5d1dbe4f-c837-4206-aa33-8ad657a3f4e5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8pjf2" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.448286 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54960e89-4e49-4c21-bea4-cc46fcf8edba-config\") pod \"route-controller-manager-6576b87f9c-sr8zs\" (UID: \"54960e89-4e49-4c21-bea4-cc46fcf8edba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.448356 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/77d64c85-edec-42f6-9c3e-7bbbf04cc84e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-sh7dc\" (UID: 
\"77d64c85-edec-42f6-9c3e-7bbbf04cc84e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sh7dc" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.448415 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2020a25e-c390-4919-8f4f-3472caca4c14-trusted-ca-bundle\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.449050 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf9pf\" (UniqueName: \"kubernetes.io/projected/1bbd08ce-3fa6-41a5-a2cd-2bc4b6188107-kube-api-access-cf9pf\") pod \"openshift-controller-manager-operator-756b6f6bc6-dzxkd\" (UID: \"1bbd08ce-3fa6-41a5-a2cd-2bc4b6188107\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzxkd" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.449064 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b805d945-8eed-48d3-9547-560266e5dfb1-audit-policies\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.449124 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjm58\" (UniqueName: \"kubernetes.io/projected/1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6-kube-api-access-hjm58\") pod \"console-operator-58897d9998-nwhhr\" (UID: \"1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6\") " pod="openshift-console-operator/console-operator-58897d9998-nwhhr" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.449174 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46ceb35e-f316-481d-acdc-c61617f13e5f-serving-cert\") pod \"authentication-operator-69f744f599-hpmth\" (UID: \"46ceb35e-f316-481d-acdc-c61617f13e5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpmth" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.449202 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/325978fb-e819-4d17-af79-821ee41da615-profile-collector-cert\") pod \"catalog-operator-68c6474976-p4mnm\" (UID: \"325978fb-e819-4d17-af79-821ee41da615\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4mnm" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.449235 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/590153a9-670a-4443-853c-e1bd935d57c3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rt422\" (UID: \"590153a9-670a-4443-853c-e1bd935d57c3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rt422" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.449271 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6-serving-cert\") pod \"console-operator-58897d9998-nwhhr\" (UID: \"1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6\") " 
pod="openshift-console-operator/console-operator-58897d9998-nwhhr" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.449297 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd22a22c-e9a1-4ca8-991c-100337423ece-etcd-client\") pod \"etcd-operator-b45778765-vwm48\" (UID: \"fd22a22c-e9a1-4ca8-991c-100337423ece\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vwm48" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.449650 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2020a25e-c390-4919-8f4f-3472caca4c14-config\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.449980 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.450364 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6-trusted-ca\") pod \"console-operator-58897d9998-nwhhr\" (UID: \"1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6\") " pod="openshift-console-operator/console-operator-58897d9998-nwhhr" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.450410 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-t76mv"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.452688 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46ceb35e-f316-481d-acdc-c61617f13e5f-serving-cert\") pod \"authentication-operator-69f744f599-hpmth\" (UID: \"46ceb35e-f316-481d-acdc-c61617f13e5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpmth" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.451783 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/45bbe65f-8e73-4b73-863c-15db667e3e22-service-ca\") pod \"console-f9d7485db-x74qn\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") " pod="openshift-console/console-f9d7485db-x74qn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.451969 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/45bbe65f-8e73-4b73-863c-15db667e3e22-console-config\") pod \"console-f9d7485db-x74qn\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") " pod="openshift-console/console-f9d7485db-x74qn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.451192 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/45bbe65f-8e73-4b73-863c-15db667e3e22-console-config\") pod \"console-f9d7485db-x74qn\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") " pod="openshift-console/console-f9d7485db-x74qn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.451971 4933 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/45bbe65f-8e73-4b73-863c-15db667e3e22-console-oauth-config\") pod \"console-f9d7485db-x74qn\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") " pod="openshift-console/console-f9d7485db-x74qn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.452255 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/45bbe65f-8e73-4b73-863c-15db667e3e22-console-serving-cert\") pod \"console-f9d7485db-x74qn\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") " pod="openshift-console/console-f9d7485db-x74qn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.452835 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a594f887-35b0-4757-9522-e22b68536bca-machine-approver-tls\") pod \"machine-approver-56656f9798-qbr9b\" (UID: \"a594f887-35b0-4757-9522-e22b68536bca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qbr9b" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.452866 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da8f0888-39cd-4813-8f5b-ba725fb15ee5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-v9qqn\" (UID: \"da8f0888-39cd-4813-8f5b-ba725fb15ee5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.452981 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da8f0888-39cd-4813-8f5b-ba725fb15ee5-config\") pod \"controller-manager-879f6c89f-v9qqn\" (UID: \"da8f0888-39cd-4813-8f5b-ba725fb15ee5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.453046 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxqcs\" (UniqueName: \"kubernetes.io/projected/46ceb35e-f316-481d-acdc-c61617f13e5f-kube-api-access-sxqcs\") pod \"authentication-operator-69f744f599-hpmth\" (UID: \"46ceb35e-f316-481d-acdc-c61617f13e5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpmth" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.453132 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51766f22-0ddf-4f2e-bbbd-059431d6ef4e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zj2bn\" (UID: \"51766f22-0ddf-4f2e-bbbd-059431d6ef4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-zj2bn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.453192 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69273f1-a871-402d-bb09-2150ee1134b1-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-htq9g\" (UID: \"e69273f1-a871-402d-bb09-2150ee1134b1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htq9g" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.453277 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxwbv\" (UniqueName: \"kubernetes.io/projected/91787165-ff9b-4d6c-8d81-a6efc4bdb19a-kube-api-access-bxwbv\") pod 
\"cluster-image-registry-operator-dc59b4c8b-bsjj4\" (UID: \"91787165-ff9b-4d6c-8d81-a6efc4bdb19a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsjj4" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.453345 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scpz7\" (UniqueName: \"kubernetes.io/projected/29cdc67d-6d2a-44b2-bd31-3634aff7f52e-kube-api-access-scpz7\") pod \"machine-api-operator-5694c8668f-xdrhr\" (UID: \"29cdc67d-6d2a-44b2-bd31-3634aff7f52e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xdrhr" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.453407 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmk2b\" (UniqueName: \"kubernetes.io/projected/da8f0888-39cd-4813-8f5b-ba725fb15ee5-kube-api-access-kmk2b\") pod \"controller-manager-879f6c89f-v9qqn\" (UID: \"da8f0888-39cd-4813-8f5b-ba725fb15ee5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.453437 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46ceb35e-f316-481d-acdc-c61617f13e5f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hpmth\" (UID: \"46ceb35e-f316-481d-acdc-c61617f13e5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpmth" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.453495 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29cdc67d-6d2a-44b2-bd31-3634aff7f52e-config\") pod \"machine-api-operator-5694c8668f-xdrhr\" (UID: \"29cdc67d-6d2a-44b2-bd31-3634aff7f52e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xdrhr" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.453593 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2020a25e-c390-4919-8f4f-3472caca4c14-etcd-serving-ca\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.453635 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4s5l\" (UniqueName: \"kubernetes.io/projected/fd22a22c-e9a1-4ca8-991c-100337423ece-kube-api-access-w4s5l\") pod \"etcd-operator-b45778765-vwm48\" (UID: \"fd22a22c-e9a1-4ca8-991c-100337423ece\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vwm48" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.453693 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/29cdc67d-6d2a-44b2-bd31-3634aff7f52e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xdrhr\" (UID: \"29cdc67d-6d2a-44b2-bd31-3634aff7f52e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xdrhr" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.453658 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: 
\"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.453718 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd22a22c-e9a1-4ca8-991c-100337423ece-etcd-service-ca\") pod \"etcd-operator-b45778765-vwm48\" (UID: \"fd22a22c-e9a1-4ca8-991c-100337423ece\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vwm48" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.453865 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h52m\" (UniqueName: \"kubernetes.io/projected/797e5656-fa46-48f0-a336-40560b3da3a5-kube-api-access-7h52m\") pod \"dns-default-lmgsj\" (UID: \"797e5656-fa46-48f0-a336-40560b3da3a5\") " pod="openshift-dns/dns-default-lmgsj" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.453904 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.453932 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh2bz\" (UniqueName: \"kubernetes.io/projected/b805d945-8eed-48d3-9547-560266e5dfb1-kube-api-access-wh2bz\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.454024 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da8f0888-39cd-4813-8f5b-ba725fb15ee5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-v9qqn\" (UID: \"da8f0888-39cd-4813-8f5b-ba725fb15ee5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.454029 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77d64c85-edec-42f6-9c3e-7bbbf04cc84e-serving-cert\") pod \"openshift-config-operator-7777fb866f-sh7dc\" (UID: \"77d64c85-edec-42f6-9c3e-7bbbf04cc84e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sh7dc" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.454136 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/51766f22-0ddf-4f2e-bbbd-059431d6ef4e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zj2bn\" (UID: \"51766f22-0ddf-4f2e-bbbd-059431d6ef4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-zj2bn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.454399 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf4nw\" (UniqueName: \"kubernetes.io/projected/375b62a4-7c53-4d18-8bf9-f9378321a8de-kube-api-access-bf4nw\") pod \"ingress-operator-5b745b69d9-dnfn7\" (UID: \"375b62a4-7c53-4d18-8bf9-f9378321a8de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dnfn7" Dec 01 
09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.454460 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da8f0888-39cd-4813-8f5b-ba725fb15ee5-config\") pod \"controller-manager-879f6c89f-v9qqn\" (UID: \"da8f0888-39cd-4813-8f5b-ba725fb15ee5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.454486 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjpbt\" (UniqueName: \"kubernetes.io/projected/54960e89-4e49-4c21-bea4-cc46fcf8edba-kube-api-access-tjpbt\") pod \"route-controller-manager-6576b87f9c-sr8zs\" (UID: \"54960e89-4e49-4c21-bea4-cc46fcf8edba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.454699 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9074f73d-a336-4a52-960b-b18e219d12a5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-x82fl\" (UID: \"9074f73d-a336-4a52-960b-b18e219d12a5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x82fl" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.454875 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39d14064-58a6-4a37-9a8f-2e3fdf93c46a-serving-cert\") pod \"apiserver-7bbb656c7d-6qdvh\" (UID: \"39d14064-58a6-4a37-9a8f-2e3fdf93c46a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.454878 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2020a25e-c390-4919-8f4f-3472caca4c14-etcd-serving-ca\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.455015 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2020a25e-c390-4919-8f4f-3472caca4c14-audit\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.455079 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmtsv\" (UniqueName: \"kubernetes.io/projected/51766f22-0ddf-4f2e-bbbd-059431d6ef4e-kube-api-access-jmtsv\") pod \"marketplace-operator-79b997595-zj2bn\" (UID: \"51766f22-0ddf-4f2e-bbbd-059431d6ef4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-zj2bn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.455179 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/375b62a4-7c53-4d18-8bf9-f9378321a8de-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dnfn7\" (UID: \"375b62a4-7c53-4d18-8bf9-f9378321a8de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dnfn7" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.455360 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/e360f8b0-0f1b-4a9b-9aed-cd0a8976482a-default-certificate\") pod \"router-default-5444994796-wnvhn\" (UID: \"e360f8b0-0f1b-4a9b-9aed-cd0a8976482a\") " pod="openshift-ingress/router-default-5444994796-wnvhn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.455477 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e360f8b0-0f1b-4a9b-9aed-cd0a8976482a-metrics-certs\") pod \"router-default-5444994796-wnvhn\" (UID: \"e360f8b0-0f1b-4a9b-9aed-cd0a8976482a\") " pod="openshift-ingress/router-default-5444994796-wnvhn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.455537 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b805d945-8eed-48d3-9547-560266e5dfb1-audit-dir\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.455586 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/39d14064-58a6-4a37-9a8f-2e3fdf93c46a-etcd-client\") pod \"apiserver-7bbb656c7d-6qdvh\" (UID: \"39d14064-58a6-4a37-9a8f-2e3fdf93c46a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.455610 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39d14064-58a6-4a37-9a8f-2e3fdf93c46a-audit-dir\") pod \"apiserver-7bbb656c7d-6qdvh\" (UID: \"39d14064-58a6-4a37-9a8f-2e3fdf93c46a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.455636 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2020a25e-c390-4919-8f4f-3472caca4c14-etcd-client\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.455764 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2020a25e-c390-4919-8f4f-3472caca4c14-image-import-ca\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.455904 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39d14064-58a6-4a37-9a8f-2e3fdf93c46a-audit-dir\") pod \"apiserver-7bbb656c7d-6qdvh\" (UID: \"39d14064-58a6-4a37-9a8f-2e3fdf93c46a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.455998 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.456054 
4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2020a25e-c390-4919-8f4f-3472caca4c14-audit-dir\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.456107 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d1dbe4f-c837-4206-aa33-8ad657a3f4e5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8pjf2\" (UID: \"5d1dbe4f-c837-4206-aa33-8ad657a3f4e5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8pjf2" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.456140 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bbd08ce-3fa6-41a5-a2cd-2bc4b6188107-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dzxkd\" (UID: \"1bbd08ce-3fa6-41a5-a2cd-2bc4b6188107\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzxkd" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.456230 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46ceb35e-f316-481d-acdc-c61617f13e5f-config\") pod \"authentication-operator-69f744f599-hpmth\" (UID: \"46ceb35e-f316-481d-acdc-c61617f13e5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpmth" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.456235 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46ceb35e-f316-481d-acdc-c61617f13e5f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hpmth\" (UID: \"46ceb35e-f316-481d-acdc-c61617f13e5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpmth" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.456343 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2020a25e-c390-4919-8f4f-3472caca4c14-audit\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.456492 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2020a25e-c390-4919-8f4f-3472caca4c14-node-pullsecrets\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.458329 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.457411 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2020a25e-c390-4919-8f4f-3472caca4c14-image-import-ca\") pod 
\"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.457430 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-t76mv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.458439 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46ceb35e-f316-481d-acdc-c61617f13e5f-config\") pod \"authentication-operator-69f744f599-hpmth\" (UID: \"46ceb35e-f316-481d-acdc-c61617f13e5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpmth" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.457472 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b805d945-8eed-48d3-9547-560266e5dfb1-audit-dir\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.457497 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2020a25e-c390-4919-8f4f-3472caca4c14-node-pullsecrets\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.457515 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2020a25e-c390-4919-8f4f-3472caca4c14-audit-dir\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.457861 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39d14064-58a6-4a37-9a8f-2e3fdf93c46a-serving-cert\") pod \"apiserver-7bbb656c7d-6qdvh\" (UID: \"39d14064-58a6-4a37-9a8f-2e3fdf93c46a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.457293 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7jpf"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.458731 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2020a25e-c390-4919-8f4f-3472caca4c14-serving-cert\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.458999 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54960e89-4e49-4c21-bea4-cc46fcf8edba-serving-cert\") pod \"route-controller-manager-6576b87f9c-sr8zs\" (UID: \"54960e89-4e49-4c21-bea4-cc46fcf8edba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.459086 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mckl6\" (UniqueName: 
\"kubernetes.io/projected/5d1dbe4f-c837-4206-aa33-8ad657a3f4e5-kube-api-access-mckl6\") pod \"openshift-apiserver-operator-796bbdcf4f-8pjf2\" (UID: \"5d1dbe4f-c837-4206-aa33-8ad657a3f4e5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8pjf2" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.459233 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/797e5656-fa46-48f0-a336-40560b3da3a5-config-volume\") pod \"dns-default-lmgsj\" (UID: \"797e5656-fa46-48f0-a336-40560b3da3a5\") " pod="openshift-dns/dns-default-lmgsj" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.459276 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bbd08ce-3fa6-41a5-a2cd-2bc4b6188107-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dzxkd\" (UID: \"1bbd08ce-3fa6-41a5-a2cd-2bc4b6188107\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzxkd" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.459352 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvrp4\" (UniqueName: \"kubernetes.io/projected/9b3e0393-fe92-4ca0-b9ce-85bceddbfad4-kube-api-access-gvrp4\") pod \"packageserver-d55dfcdfc-n22nk\" (UID: \"9b3e0393-fe92-4ca0-b9ce-85bceddbfad4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n22nk" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.459383 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/78d9587b-0bca-4439-8518-f652be926d70-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-g9j6m\" (UID: \"78d9587b-0bca-4439-8518-f652be926d70\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g9j6m" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.459416 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvzlq\" (UniqueName: \"kubernetes.io/projected/77d64c85-edec-42f6-9c3e-7bbbf04cc84e-kube-api-access-lvzlq\") pod \"openshift-config-operator-7777fb866f-sh7dc\" (UID: \"77d64c85-edec-42f6-9c3e-7bbbf04cc84e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sh7dc" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.459443 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6-config\") pod \"console-operator-58897d9998-nwhhr\" (UID: \"1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6\") " pod="openshift-console-operator/console-operator-58897d9998-nwhhr" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.459471 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf8pl\" (UniqueName: \"kubernetes.io/projected/2020a25e-c390-4919-8f4f-3472caca4c14-kube-api-access-gf8pl\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.459500 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.459523 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.459549 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45bbe65f-8e73-4b73-863c-15db667e3e22-trusted-ca-bundle\") pod \"console-f9d7485db-x74qn\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") " pod="openshift-console/console-f9d7485db-x74qn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.459582 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d55vr\" (UniqueName: \"kubernetes.io/projected/39d14064-58a6-4a37-9a8f-2e3fdf93c46a-kube-api-access-d55vr\") pod \"apiserver-7bbb656c7d-6qdvh\" (UID: \"39d14064-58a6-4a37-9a8f-2e3fdf93c46a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.459608 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/590153a9-670a-4443-853c-e1bd935d57c3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rt422\" (UID: \"590153a9-670a-4443-853c-e1bd935d57c3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rt422" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.459637 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.459661 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/39d14064-58a6-4a37-9a8f-2e3fdf93c46a-audit-policies\") pod \"apiserver-7bbb656c7d-6qdvh\" (UID: \"39d14064-58a6-4a37-9a8f-2e3fdf93c46a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.459689 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.459716 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/590153a9-670a-4443-853c-e1bd935d57c3-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-rt422\" (UID: \"590153a9-670a-4443-853c-e1bd935d57c3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rt422" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.460430 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.460669 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.460678 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.460741 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2020a25e-c390-4919-8f4f-3472caca4c14-etcd-client\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.461936 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/39d14064-58a6-4a37-9a8f-2e3fdf93c46a-audit-policies\") pod \"apiserver-7bbb656c7d-6qdvh\" (UID: \"39d14064-58a6-4a37-9a8f-2e3fdf93c46a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.462122 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d1dbe4f-c837-4206-aa33-8ad657a3f4e5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8pjf2\" (UID: \"5d1dbe4f-c837-4206-aa33-8ad657a3f4e5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8pjf2" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.462672 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/39d14064-58a6-4a37-9a8f-2e3fdf93c46a-etcd-client\") pod \"apiserver-7bbb656c7d-6qdvh\" (UID: \"39d14064-58a6-4a37-9a8f-2e3fdf93c46a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.462722 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6-config\") pod \"console-operator-58897d9998-nwhhr\" (UID: \"1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6\") " pod="openshift-console-operator/console-operator-58897d9998-nwhhr" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.462735 
4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409690-jvl8l"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.463079 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6-serving-cert\") pod \"console-operator-58897d9998-nwhhr\" (UID: \"1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6\") " pod="openshift-console-operator/console-operator-58897d9998-nwhhr" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.463230 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.463332 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.463595 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45bbe65f-8e73-4b73-863c-15db667e3e22-trusted-ca-bundle\") pod \"console-f9d7485db-x74qn\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") " pod="openshift-console/console-f9d7485db-x74qn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.464358 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da8f0888-39cd-4813-8f5b-ba725fb15ee5-serving-cert\") pod \"controller-manager-879f6c89f-v9qqn\" (UID: \"da8f0888-39cd-4813-8f5b-ba725fb15ee5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.464410 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x82fl"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.465320 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.465466 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2020a25e-c390-4919-8f4f-3472caca4c14-serving-cert\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.465869 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6dwg7"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.467171 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-znqzs"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.468204 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.470259 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-q5ch5"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.471915 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6lzfq"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.475225 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-s62bx"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.476375 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-s62bx" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.476882 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-s62bx"] Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.483465 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.484877 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.505083 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.525350 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.545154 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.561226 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9b3e0393-fe92-4ca0-b9ce-85bceddbfad4-tmpfs\") pod \"packageserver-d55dfcdfc-n22nk\" (UID: \"9b3e0393-fe92-4ca0-b9ce-85bceddbfad4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n22nk" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.561339 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd22a22c-e9a1-4ca8-991c-100337423ece-serving-cert\") pod \"etcd-operator-b45778765-vwm48\" (UID: \"fd22a22c-e9a1-4ca8-991c-100337423ece\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vwm48" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.561378 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmxxz\" (UniqueName: \"kubernetes.io/projected/c78b2b58-b81d-4a67-b879-9812138fdd29-kube-api-access-vmxxz\") pod \"olm-operator-6b444d44fb-x7jpf\" (UID: \"c78b2b58-b81d-4a67-b879-9812138fdd29\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7jpf" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.561410 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9b3e0393-fe92-4ca0-b9ce-85bceddbfad4-webhook-cert\") pod \"packageserver-d55dfcdfc-n22nk\" (UID: \"9b3e0393-fe92-4ca0-b9ce-85bceddbfad4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n22nk" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.561437 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fd22a22c-e9a1-4ca8-991c-100337423ece-etcd-ca\") pod \"etcd-operator-b45778765-vwm48\" (UID: \"fd22a22c-e9a1-4ca8-991c-100337423ece\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vwm48" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.561474 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c78b2b58-b81d-4a67-b879-9812138fdd29-srv-cert\") pod \"olm-operator-6b444d44fb-x7jpf\" (UID: \"c78b2b58-b81d-4a67-b879-9812138fdd29\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7jpf" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.561517 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/91787165-ff9b-4d6c-8d81-a6efc4bdb19a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bsjj4\" (UID: \"91787165-ff9b-4d6c-8d81-a6efc4bdb19a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsjj4" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.561558 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd22a22c-e9a1-4ca8-991c-100337423ece-config\") pod \"etcd-operator-b45778765-vwm48\" (UID: \"fd22a22c-e9a1-4ca8-991c-100337423ece\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vwm48" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.561588 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/325978fb-e819-4d17-af79-821ee41da615-srv-cert\") pod \"catalog-operator-68c6474976-p4mnm\" (UID: \"325978fb-e819-4d17-af79-821ee41da615\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4mnm" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.561616 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvsp5\" (UniqueName: \"kubernetes.io/projected/325978fb-e819-4d17-af79-821ee41da615-kube-api-access-cvsp5\") pod \"catalog-operator-68c6474976-p4mnm\" (UID: \"325978fb-e819-4d17-af79-821ee41da615\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4mnm" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.561676 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e360f8b0-0f1b-4a9b-9aed-cd0a8976482a-service-ca-bundle\") pod \"router-default-5444994796-wnvhn\" (UID: \"e360f8b0-0f1b-4a9b-9aed-cd0a8976482a\") " pod="openshift-ingress/router-default-5444994796-wnvhn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.561710 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/9b3e0393-fe92-4ca0-b9ce-85bceddbfad4-apiservice-cert\") pod \"packageserver-d55dfcdfc-n22nk\" (UID: \"9b3e0393-fe92-4ca0-b9ce-85bceddbfad4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n22nk" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.561745 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c78b2b58-b81d-4a67-b879-9812138fdd29-profile-collector-cert\") pod \"olm-operator-6b444d44fb-x7jpf\" (UID: \"c78b2b58-b81d-4a67-b879-9812138fdd29\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7jpf" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.561772 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/375b62a4-7c53-4d18-8bf9-f9378321a8de-trusted-ca\") pod \"ingress-operator-5b745b69d9-dnfn7\" (UID: \"375b62a4-7c53-4d18-8bf9-f9378321a8de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dnfn7" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.561842 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf9pf\" (UniqueName: \"kubernetes.io/projected/1bbd08ce-3fa6-41a5-a2cd-2bc4b6188107-kube-api-access-cf9pf\") pod \"openshift-controller-manager-operator-756b6f6bc6-dzxkd\" (UID: \"1bbd08ce-3fa6-41a5-a2cd-2bc4b6188107\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzxkd" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.561888 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/325978fb-e819-4d17-af79-821ee41da615-profile-collector-cert\") pod \"catalog-operator-68c6474976-p4mnm\" (UID: \"325978fb-e819-4d17-af79-821ee41da615\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4mnm" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.561840 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9b3e0393-fe92-4ca0-b9ce-85bceddbfad4-tmpfs\") pod \"packageserver-d55dfcdfc-n22nk\" (UID: \"9b3e0393-fe92-4ca0-b9ce-85bceddbfad4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n22nk" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.562765 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd22a22c-e9a1-4ca8-991c-100337423ece-config\") pod \"etcd-operator-b45778765-vwm48\" (UID: \"fd22a22c-e9a1-4ca8-991c-100337423ece\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vwm48" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.562866 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fd22a22c-e9a1-4ca8-991c-100337423ece-etcd-ca\") pod \"etcd-operator-b45778765-vwm48\" (UID: \"fd22a22c-e9a1-4ca8-991c-100337423ece\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vwm48" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.562948 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/590153a9-670a-4443-853c-e1bd935d57c3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rt422\" (UID: \"590153a9-670a-4443-853c-e1bd935d57c3\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rt422" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.563080 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd22a22c-e9a1-4ca8-991c-100337423ece-etcd-client\") pod \"etcd-operator-b45778765-vwm48\" (UID: \"fd22a22c-e9a1-4ca8-991c-100337423ece\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vwm48" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.563334 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51766f22-0ddf-4f2e-bbbd-059431d6ef4e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zj2bn\" (UID: \"51766f22-0ddf-4f2e-bbbd-059431d6ef4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-zj2bn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.563402 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69273f1-a871-402d-bb09-2150ee1134b1-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-htq9g\" (UID: \"e69273f1-a871-402d-bb09-2150ee1134b1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htq9g" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.563675 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxwbv\" (UniqueName: \"kubernetes.io/projected/91787165-ff9b-4d6c-8d81-a6efc4bdb19a-kube-api-access-bxwbv\") pod \"cluster-image-registry-operator-dc59b4c8b-bsjj4\" (UID: \"91787165-ff9b-4d6c-8d81-a6efc4bdb19a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsjj4" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.563746 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scpz7\" (UniqueName: \"kubernetes.io/projected/29cdc67d-6d2a-44b2-bd31-3634aff7f52e-kube-api-access-scpz7\") pod \"machine-api-operator-5694c8668f-xdrhr\" (UID: \"29cdc67d-6d2a-44b2-bd31-3634aff7f52e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xdrhr" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.563886 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4s5l\" (UniqueName: \"kubernetes.io/projected/fd22a22c-e9a1-4ca8-991c-100337423ece-kube-api-access-w4s5l\") pod \"etcd-operator-b45778765-vwm48\" (UID: \"fd22a22c-e9a1-4ca8-991c-100337423ece\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vwm48" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.563949 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29cdc67d-6d2a-44b2-bd31-3634aff7f52e-config\") pod \"machine-api-operator-5694c8668f-xdrhr\" (UID: \"29cdc67d-6d2a-44b2-bd31-3634aff7f52e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xdrhr" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.563977 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/29cdc67d-6d2a-44b2-bd31-3634aff7f52e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xdrhr\" (UID: \"29cdc67d-6d2a-44b2-bd31-3634aff7f52e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xdrhr" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 
09:34:04.564895 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.564907 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd22a22c-e9a1-4ca8-991c-100337423ece-etcd-service-ca\") pod \"etcd-operator-b45778765-vwm48\" (UID: \"fd22a22c-e9a1-4ca8-991c-100337423ece\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vwm48" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.564860 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29cdc67d-6d2a-44b2-bd31-3634aff7f52e-config\") pod \"machine-api-operator-5694c8668f-xdrhr\" (UID: \"29cdc67d-6d2a-44b2-bd31-3634aff7f52e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xdrhr" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.565079 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h52m\" (UniqueName: \"kubernetes.io/projected/797e5656-fa46-48f0-a336-40560b3da3a5-kube-api-access-7h52m\") pod \"dns-default-lmgsj\" (UID: \"797e5656-fa46-48f0-a336-40560b3da3a5\") " pod="openshift-dns/dns-default-lmgsj" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.565127 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/51766f22-0ddf-4f2e-bbbd-059431d6ef4e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zj2bn\" (UID: \"51766f22-0ddf-4f2e-bbbd-059431d6ef4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-zj2bn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.565162 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf4nw\" (UniqueName: \"kubernetes.io/projected/375b62a4-7c53-4d18-8bf9-f9378321a8de-kube-api-access-bf4nw\") pod \"ingress-operator-5b745b69d9-dnfn7\" (UID: \"375b62a4-7c53-4d18-8bf9-f9378321a8de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dnfn7" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.565224 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmtsv\" (UniqueName: \"kubernetes.io/projected/51766f22-0ddf-4f2e-bbbd-059431d6ef4e-kube-api-access-jmtsv\") pod \"marketplace-operator-79b997595-zj2bn\" (UID: \"51766f22-0ddf-4f2e-bbbd-059431d6ef4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-zj2bn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.565255 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9074f73d-a336-4a52-960b-b18e219d12a5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-x82fl\" (UID: \"9074f73d-a336-4a52-960b-b18e219d12a5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x82fl" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.565293 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/375b62a4-7c53-4d18-8bf9-f9378321a8de-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dnfn7\" (UID: \"375b62a4-7c53-4d18-8bf9-f9378321a8de\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dnfn7" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.565347 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e360f8b0-0f1b-4a9b-9aed-cd0a8976482a-default-certificate\") pod \"router-default-5444994796-wnvhn\" (UID: \"e360f8b0-0f1b-4a9b-9aed-cd0a8976482a\") " pod="openshift-ingress/router-default-5444994796-wnvhn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.565387 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e360f8b0-0f1b-4a9b-9aed-cd0a8976482a-metrics-certs\") pod \"router-default-5444994796-wnvhn\" (UID: \"e360f8b0-0f1b-4a9b-9aed-cd0a8976482a\") " pod="openshift-ingress/router-default-5444994796-wnvhn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.565432 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bbd08ce-3fa6-41a5-a2cd-2bc4b6188107-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dzxkd\" (UID: \"1bbd08ce-3fa6-41a5-a2cd-2bc4b6188107\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzxkd" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.565465 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd22a22c-e9a1-4ca8-991c-100337423ece-etcd-service-ca\") pod \"etcd-operator-b45778765-vwm48\" (UID: \"fd22a22c-e9a1-4ca8-991c-100337423ece\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vwm48" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.565490 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/797e5656-fa46-48f0-a336-40560b3da3a5-config-volume\") pod \"dns-default-lmgsj\" (UID: \"797e5656-fa46-48f0-a336-40560b3da3a5\") " pod="openshift-dns/dns-default-lmgsj" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.565250 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd22a22c-e9a1-4ca8-991c-100337423ece-serving-cert\") pod \"etcd-operator-b45778765-vwm48\" (UID: \"fd22a22c-e9a1-4ca8-991c-100337423ece\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vwm48" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.565523 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bbd08ce-3fa6-41a5-a2cd-2bc4b6188107-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dzxkd\" (UID: \"1bbd08ce-3fa6-41a5-a2cd-2bc4b6188107\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzxkd" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.565657 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvrp4\" (UniqueName: \"kubernetes.io/projected/9b3e0393-fe92-4ca0-b9ce-85bceddbfad4-kube-api-access-gvrp4\") pod \"packageserver-d55dfcdfc-n22nk\" (UID: \"9b3e0393-fe92-4ca0-b9ce-85bceddbfad4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n22nk" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.565718 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/78d9587b-0bca-4439-8518-f652be926d70-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-g9j6m\" (UID: \"78d9587b-0bca-4439-8518-f652be926d70\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g9j6m" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.565767 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/590153a9-670a-4443-853c-e1bd935d57c3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rt422\" (UID: \"590153a9-670a-4443-853c-e1bd935d57c3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rt422" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.565828 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/590153a9-670a-4443-853c-e1bd935d57c3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rt422\" (UID: \"590153a9-670a-4443-853c-e1bd935d57c3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rt422" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.565857 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/91787165-ff9b-4d6c-8d81-a6efc4bdb19a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bsjj4\" (UID: \"91787165-ff9b-4d6c-8d81-a6efc4bdb19a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsjj4" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.565886 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/797e5656-fa46-48f0-a336-40560b3da3a5-metrics-tls\") pod \"dns-default-lmgsj\" (UID: \"797e5656-fa46-48f0-a336-40560b3da3a5\") " pod="openshift-dns/dns-default-lmgsj" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.565910 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/29cdc67d-6d2a-44b2-bd31-3634aff7f52e-images\") pod \"machine-api-operator-5694c8668f-xdrhr\" (UID: \"29cdc67d-6d2a-44b2-bd31-3634aff7f52e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xdrhr" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.565935 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p2sk\" (UniqueName: \"kubernetes.io/projected/9074f73d-a336-4a52-960b-b18e219d12a5-kube-api-access-6p2sk\") pod \"package-server-manager-789f6589d5-x82fl\" (UID: \"9074f73d-a336-4a52-960b-b18e219d12a5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x82fl" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.565975 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e69273f1-a871-402d-bb09-2150ee1134b1-config\") pod \"kube-apiserver-operator-766d6c64bb-htq9g\" (UID: \"e69273f1-a871-402d-bb09-2150ee1134b1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htq9g" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.565998 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9vqg\" (UniqueName: \"kubernetes.io/projected/78d9587b-0bca-4439-8518-f652be926d70-kube-api-access-t9vqg\") pod 
\"multus-admission-controller-857f4d67dd-g9j6m\" (UID: \"78d9587b-0bca-4439-8518-f652be926d70\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g9j6m" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.566026 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjp6t\" (UniqueName: \"kubernetes.io/projected/e360f8b0-0f1b-4a9b-9aed-cd0a8976482a-kube-api-access-gjp6t\") pod \"router-default-5444994796-wnvhn\" (UID: \"e360f8b0-0f1b-4a9b-9aed-cd0a8976482a\") " pod="openshift-ingress/router-default-5444994796-wnvhn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.566055 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp22d\" (UniqueName: \"kubernetes.io/projected/c121048e-9df5-412a-9d86-e7cf8a59d0e1-kube-api-access-tp22d\") pod \"downloads-7954f5f757-v4fq8\" (UID: \"c121048e-9df5-412a-9d86-e7cf8a59d0e1\") " pod="openshift-console/downloads-7954f5f757-v4fq8" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.566084 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91787165-ff9b-4d6c-8d81-a6efc4bdb19a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bsjj4\" (UID: \"91787165-ff9b-4d6c-8d81-a6efc4bdb19a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsjj4" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.566161 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e69273f1-a871-402d-bb09-2150ee1134b1-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-htq9g\" (UID: \"e69273f1-a871-402d-bb09-2150ee1134b1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htq9g" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.566192 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e360f8b0-0f1b-4a9b-9aed-cd0a8976482a-stats-auth\") pod \"router-default-5444994796-wnvhn\" (UID: \"e360f8b0-0f1b-4a9b-9aed-cd0a8976482a\") " pod="openshift-ingress/router-default-5444994796-wnvhn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.566219 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/375b62a4-7c53-4d18-8bf9-f9378321a8de-metrics-tls\") pod \"ingress-operator-5b745b69d9-dnfn7\" (UID: \"375b62a4-7c53-4d18-8bf9-f9378321a8de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dnfn7" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.566510 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bbd08ce-3fa6-41a5-a2cd-2bc4b6188107-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dzxkd\" (UID: \"1bbd08ce-3fa6-41a5-a2cd-2bc4b6188107\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzxkd" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.567504 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69273f1-a871-402d-bb09-2150ee1134b1-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-htq9g\" (UID: \"e69273f1-a871-402d-bb09-2150ee1134b1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htq9g" Dec 01 09:34:04 crc 
kubenswrapper[4933]: I1201 09:34:04.567618 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e69273f1-a871-402d-bb09-2150ee1134b1-config\") pod \"kube-apiserver-operator-766d6c64bb-htq9g\" (UID: \"e69273f1-a871-402d-bb09-2150ee1134b1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htq9g" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.567678 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd22a22c-e9a1-4ca8-991c-100337423ece-etcd-client\") pod \"etcd-operator-b45778765-vwm48\" (UID: \"fd22a22c-e9a1-4ca8-991c-100337423ece\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vwm48" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.568002 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/29cdc67d-6d2a-44b2-bd31-3634aff7f52e-images\") pod \"machine-api-operator-5694c8668f-xdrhr\" (UID: \"29cdc67d-6d2a-44b2-bd31-3634aff7f52e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xdrhr" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.568658 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91787165-ff9b-4d6c-8d81-a6efc4bdb19a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bsjj4\" (UID: \"91787165-ff9b-4d6c-8d81-a6efc4bdb19a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsjj4" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.569502 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/29cdc67d-6d2a-44b2-bd31-3634aff7f52e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xdrhr\" (UID: \"29cdc67d-6d2a-44b2-bd31-3634aff7f52e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xdrhr" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.571568 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bbd08ce-3fa6-41a5-a2cd-2bc4b6188107-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dzxkd\" (UID: \"1bbd08ce-3fa6-41a5-a2cd-2bc4b6188107\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzxkd" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.572225 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/91787165-ff9b-4d6c-8d81-a6efc4bdb19a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bsjj4\" (UID: \"91787165-ff9b-4d6c-8d81-a6efc4bdb19a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsjj4" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.584458 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.606161 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.625497 4933 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.644856 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.665182 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.686493 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.711322 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.713745 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/375b62a4-7c53-4d18-8bf9-f9378321a8de-trusted-ca\") pod \"ingress-operator-5b745b69d9-dnfn7\" (UID: \"375b62a4-7c53-4d18-8bf9-f9378321a8de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dnfn7" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.725198 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.745727 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.750116 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/375b62a4-7c53-4d18-8bf9-f9378321a8de-metrics-tls\") pod \"ingress-operator-5b745b69d9-dnfn7\" (UID: \"375b62a4-7c53-4d18-8bf9-f9378321a8de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dnfn7" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.764997 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.785530 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.805123 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.808172 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/590153a9-670a-4443-853c-e1bd935d57c3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rt422\" (UID: \"590153a9-670a-4443-853c-e1bd935d57c3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rt422" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.825373 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.844999 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.864773 4933 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.871484 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/590153a9-670a-4443-853c-e1bd935d57c3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rt422\" (UID: \"590153a9-670a-4443-853c-e1bd935d57c3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rt422" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.886090 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.905835 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.926065 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.939603 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e360f8b0-0f1b-4a9b-9aed-cd0a8976482a-metrics-certs\") pod \"router-default-5444994796-wnvhn\" (UID: \"e360f8b0-0f1b-4a9b-9aed-cd0a8976482a\") " pod="openshift-ingress/router-default-5444994796-wnvhn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.945792 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.964577 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.970364 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e360f8b0-0f1b-4a9b-9aed-cd0a8976482a-default-certificate\") pod \"router-default-5444994796-wnvhn\" (UID: \"e360f8b0-0f1b-4a9b-9aed-cd0a8976482a\") " pod="openshift-ingress/router-default-5444994796-wnvhn" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.985610 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 01 09:34:04 crc kubenswrapper[4933]: I1201 09:34:04.990990 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e360f8b0-0f1b-4a9b-9aed-cd0a8976482a-stats-auth\") pod \"router-default-5444994796-wnvhn\" (UID: \"e360f8b0-0f1b-4a9b-9aed-cd0a8976482a\") " pod="openshift-ingress/router-default-5444994796-wnvhn" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.004690 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.014285 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e360f8b0-0f1b-4a9b-9aed-cd0a8976482a-service-ca-bundle\") pod \"router-default-5444994796-wnvhn\" (UID: \"e360f8b0-0f1b-4a9b-9aed-cd0a8976482a\") " pod="openshift-ingress/router-default-5444994796-wnvhn" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.025070 4933 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"kube-root-ca.crt" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.045120 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.085754 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.103859 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.125517 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.144799 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.164997 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.171891 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/78d9587b-0bca-4439-8518-f652be926d70-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-g9j6m\" (UID: \"78d9587b-0bca-4439-8518-f652be926d70\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g9j6m" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.196426 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.199003 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51766f22-0ddf-4f2e-bbbd-059431d6ef4e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zj2bn\" (UID: \"51766f22-0ddf-4f2e-bbbd-059431d6ef4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-zj2bn" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.204987 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.225006 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.229440 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/51766f22-0ddf-4f2e-bbbd-059431d6ef4e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zj2bn\" (UID: \"51766f22-0ddf-4f2e-bbbd-059431d6ef4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-zj2bn" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.245948 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.266166 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.285734 4933 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.305467 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.318870 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/325978fb-e819-4d17-af79-821ee41da615-srv-cert\") pod \"catalog-operator-68c6474976-p4mnm\" (UID: \"325978fb-e819-4d17-af79-821ee41da615\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4mnm" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.323064 4933 request.go:700] Waited for 1.009016701s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-rq7zk&limit=500&resourceVersion=0 Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.325906 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.345837 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.365644 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.376955 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/325978fb-e819-4d17-af79-821ee41da615-profile-collector-cert\") pod \"catalog-operator-68c6474976-p4mnm\" (UID: \"325978fb-e819-4d17-af79-821ee41da615\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4mnm" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.377273 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c78b2b58-b81d-4a67-b879-9812138fdd29-profile-collector-cert\") pod \"olm-operator-6b444d44fb-x7jpf\" (UID: \"c78b2b58-b81d-4a67-b879-9812138fdd29\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7jpf" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.385798 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.398100 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c78b2b58-b81d-4a67-b879-9812138fdd29-srv-cert\") pod \"olm-operator-6b444d44fb-x7jpf\" (UID: \"c78b2b58-b81d-4a67-b879-9812138fdd29\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7jpf" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.405853 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.427166 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.445909 4933 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.466239 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.485208 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.505004 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.524393 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.544168 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.555519 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9b3e0393-fe92-4ca0-b9ce-85bceddbfad4-apiservice-cert\") pod \"packageserver-d55dfcdfc-n22nk\" (UID: \"9b3e0393-fe92-4ca0-b9ce-85bceddbfad4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n22nk" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.556094 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9b3e0393-fe92-4ca0-b9ce-85bceddbfad4-webhook-cert\") pod \"packageserver-d55dfcdfc-n22nk\" (UID: \"9b3e0393-fe92-4ca0-b9ce-85bceddbfad4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n22nk" Dec 01 09:34:05 crc kubenswrapper[4933]: E1201 09:34:05.565613 4933 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 01 09:34:05 crc kubenswrapper[4933]: E1201 09:34:05.565707 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9074f73d-a336-4a52-960b-b18e219d12a5-package-server-manager-serving-cert podName:9074f73d-a336-4a52-960b-b18e219d12a5 nodeName:}" failed. No retries permitted until 2025-12-01 09:34:06.065685875 +0000 UTC m=+136.707409500 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/9074f73d-a336-4a52-960b-b18e219d12a5-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-x82fl" (UID: "9074f73d-a336-4a52-960b-b18e219d12a5") : failed to sync secret cache: timed out waiting for the condition Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.565624 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 01 09:34:05 crc kubenswrapper[4933]: E1201 09:34:05.566559 4933 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Dec 01 09:34:05 crc kubenswrapper[4933]: E1201 09:34:05.566746 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/797e5656-fa46-48f0-a336-40560b3da3a5-metrics-tls podName:797e5656-fa46-48f0-a336-40560b3da3a5 nodeName:}" failed. No retries permitted until 2025-12-01 09:34:06.066725941 +0000 UTC m=+136.708449556 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/797e5656-fa46-48f0-a336-40560b3da3a5-metrics-tls") pod "dns-default-lmgsj" (UID: "797e5656-fa46-48f0-a336-40560b3da3a5") : failed to sync secret cache: timed out waiting for the condition Dec 01 09:34:05 crc kubenswrapper[4933]: E1201 09:34:05.567002 4933 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Dec 01 09:34:05 crc kubenswrapper[4933]: E1201 09:34:05.567066 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/797e5656-fa46-48f0-a336-40560b3da3a5-config-volume podName:797e5656-fa46-48f0-a336-40560b3da3a5 nodeName:}" failed. No retries permitted until 2025-12-01 09:34:06.067055269 +0000 UTC m=+136.708779104 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/797e5656-fa46-48f0-a336-40560b3da3a5-config-volume") pod "dns-default-lmgsj" (UID: "797e5656-fa46-48f0-a336-40560b3da3a5") : failed to sync configmap cache: timed out waiting for the condition Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.584873 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.603693 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.624828 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.644077 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.664156 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.686390 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.705844 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.724932 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.765229 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.784903 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.805289 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.825201 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.845952 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.865119 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.884452 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.904496 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.925983 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.945658 4933 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.965882 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 01 09:34:05 crc kubenswrapper[4933]: I1201 09:34:05.984345 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.006608 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.025651 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.045418 4933 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.066158 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.088432 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9074f73d-a336-4a52-960b-b18e219d12a5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-x82fl\" (UID: \"9074f73d-a336-4a52-960b-b18e219d12a5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x82fl" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.088484 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/797e5656-fa46-48f0-a336-40560b3da3a5-config-volume\") pod \"dns-default-lmgsj\" (UID: \"797e5656-fa46-48f0-a336-40560b3da3a5\") " pod="openshift-dns/dns-default-lmgsj" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.088532 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/797e5656-fa46-48f0-a336-40560b3da3a5-metrics-tls\") pod \"dns-default-lmgsj\" (UID: \"797e5656-fa46-48f0-a336-40560b3da3a5\") " pod="openshift-dns/dns-default-lmgsj" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.089266 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/797e5656-fa46-48f0-a336-40560b3da3a5-config-volume\") pod \"dns-default-lmgsj\" (UID: \"797e5656-fa46-48f0-a336-40560b3da3a5\") " pod="openshift-dns/dns-default-lmgsj" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.091778 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/797e5656-fa46-48f0-a336-40560b3da3a5-metrics-tls\") pod \"dns-default-lmgsj\" (UID: \"797e5656-fa46-48f0-a336-40560b3da3a5\") " pod="openshift-dns/dns-default-lmgsj" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.092611 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9074f73d-a336-4a52-960b-b18e219d12a5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-x82fl\" (UID: 
\"9074f73d-a336-4a52-960b-b18e219d12a5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x82fl" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.105185 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4z57\" (UniqueName: \"kubernetes.io/projected/a594f887-35b0-4757-9522-e22b68536bca-kube-api-access-l4z57\") pod \"machine-approver-56656f9798-qbr9b\" (UID: \"a594f887-35b0-4757-9522-e22b68536bca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qbr9b" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.131826 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh7t7\" (UniqueName: \"kubernetes.io/projected/45bbe65f-8e73-4b73-863c-15db667e3e22-kube-api-access-lh7t7\") pod \"console-f9d7485db-x74qn\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") " pod="openshift-console/console-f9d7485db-x74qn" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.143441 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x74qn" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.144144 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjm58\" (UniqueName: \"kubernetes.io/projected/1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6-kube-api-access-hjm58\") pod \"console-operator-58897d9998-nwhhr\" (UID: \"1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6\") " pod="openshift-console-operator/console-operator-58897d9998-nwhhr" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.162252 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmk2b\" (UniqueName: \"kubernetes.io/projected/da8f0888-39cd-4813-8f5b-ba725fb15ee5-kube-api-access-kmk2b\") pod \"controller-manager-879f6c89f-v9qqn\" (UID: \"da8f0888-39cd-4813-8f5b-ba725fb15ee5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.183134 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh2bz\" (UniqueName: \"kubernetes.io/projected/b805d945-8eed-48d3-9547-560266e5dfb1-kube-api-access-wh2bz\") pod \"oauth-openshift-558db77b4-4wqht\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.200884 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjpbt\" (UniqueName: \"kubernetes.io/projected/54960e89-4e49-4c21-bea4-cc46fcf8edba-kube-api-access-tjpbt\") pod \"route-controller-manager-6576b87f9c-sr8zs\" (UID: \"54960e89-4e49-4c21-bea4-cc46fcf8edba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.220007 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxqcs\" (UniqueName: \"kubernetes.io/projected/46ceb35e-f316-481d-acdc-c61617f13e5f-kube-api-access-sxqcs\") pod \"authentication-operator-69f744f599-hpmth\" (UID: \"46ceb35e-f316-481d-acdc-c61617f13e5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpmth" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.225942 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 
09:34:06.245719 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.266441 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.269831 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.273604 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.301856 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-hpmth" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.313672 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvzlq\" (UniqueName: \"kubernetes.io/projected/77d64c85-edec-42f6-9c3e-7bbbf04cc84e-kube-api-access-lvzlq\") pod \"openshift-config-operator-7777fb866f-sh7dc\" (UID: \"77d64c85-edec-42f6-9c3e-7bbbf04cc84e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sh7dc" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.321491 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d55vr\" (UniqueName: \"kubernetes.io/projected/39d14064-58a6-4a37-9a8f-2e3fdf93c46a-kube-api-access-d55vr\") pod \"apiserver-7bbb656c7d-6qdvh\" (UID: \"39d14064-58a6-4a37-9a8f-2e3fdf93c46a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.327296 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qbr9b" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.340831 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf8pl\" (UniqueName: \"kubernetes.io/projected/2020a25e-c390-4919-8f4f-3472caca4c14-kube-api-access-gf8pl\") pod \"apiserver-76f77b778f-dtcjv\" (UID: \"2020a25e-c390-4919-8f4f-3472caca4c14\") " pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.343475 4933 request.go:700] Waited for 1.882808783s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/serviceaccounts/openshift-apiserver-operator/token Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.363242 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-x74qn"] Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.364046 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.365215 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mckl6\" (UniqueName: \"kubernetes.io/projected/5d1dbe4f-c837-4206-aa33-8ad657a3f4e5-kube-api-access-mckl6\") pod \"openshift-apiserver-operator-796bbdcf4f-8pjf2\" (UID: \"5d1dbe4f-c837-4206-aa33-8ad657a3f4e5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8pjf2" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.365734 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.380596 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.384902 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 01 09:34:06 crc kubenswrapper[4933]: W1201 09:34:06.385475 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45bbe65f_8e73_4b73_863c_15db667e3e22.slice/crio-3b84cc9b2d9f4f7e63cde579d5adc5136c8909363ebf87ae239d6380a4d9f5b8 WatchSource:0}: Error finding container 3b84cc9b2d9f4f7e63cde579d5adc5136c8909363ebf87ae239d6380a4d9f5b8: Status 404 returned error can't find the container with id 3b84cc9b2d9f4f7e63cde579d5adc5136c8909363ebf87ae239d6380a4d9f5b8 Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.405705 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.415107 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nwhhr" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.421922 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8pjf2" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.424727 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.434124 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sh7dc" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.465924 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmxxz\" (UniqueName: \"kubernetes.io/projected/c78b2b58-b81d-4a67-b879-9812138fdd29-kube-api-access-vmxxz\") pod \"olm-operator-6b444d44fb-x7jpf\" (UID: \"c78b2b58-b81d-4a67-b879-9812138fdd29\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7jpf" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.487332 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/91787165-ff9b-4d6c-8d81-a6efc4bdb19a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bsjj4\" (UID: \"91787165-ff9b-4d6c-8d81-a6efc4bdb19a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsjj4" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.511133 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvsp5\" (UniqueName: \"kubernetes.io/projected/325978fb-e819-4d17-af79-821ee41da615-kube-api-access-cvsp5\") pod \"catalog-operator-68c6474976-p4mnm\" (UID: \"325978fb-e819-4d17-af79-821ee41da615\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4mnm" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.529987 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs"] Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.542768 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf9pf\" (UniqueName: \"kubernetes.io/projected/1bbd08ce-3fa6-41a5-a2cd-2bc4b6188107-kube-api-access-cf9pf\") pod \"openshift-controller-manager-operator-756b6f6bc6-dzxkd\" (UID: \"1bbd08ce-3fa6-41a5-a2cd-2bc4b6188107\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzxkd" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.548482 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/590153a9-670a-4443-853c-e1bd935d57c3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rt422\" (UID: \"590153a9-670a-4443-853c-e1bd935d57c3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rt422" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.566547 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v9qqn"] Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.572026 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxwbv\" (UniqueName: \"kubernetes.io/projected/91787165-ff9b-4d6c-8d81-a6efc4bdb19a-kube-api-access-bxwbv\") pod \"cluster-image-registry-operator-dc59b4c8b-bsjj4\" (UID: \"91787165-ff9b-4d6c-8d81-a6efc4bdb19a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsjj4" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.588480 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.590862 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scpz7\" (UniqueName: \"kubernetes.io/projected/29cdc67d-6d2a-44b2-bd31-3634aff7f52e-kube-api-access-scpz7\") pod \"machine-api-operator-5694c8668f-xdrhr\" (UID: \"29cdc67d-6d2a-44b2-bd31-3634aff7f52e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xdrhr" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.592420 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qbr9b" event={"ID":"a594f887-35b0-4757-9522-e22b68536bca","Type":"ContainerStarted","Data":"c55972f31473957b5759f46ac50cd908dc3e6e6694f8ac94b379cfb792abfe5e"} Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.593885 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hpmth"] Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.597606 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x74qn" event={"ID":"45bbe65f-8e73-4b73-863c-15db667e3e22","Type":"ContainerStarted","Data":"3b84cc9b2d9f4f7e63cde579d5adc5136c8909363ebf87ae239d6380a4d9f5b8"} Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.599067 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs" event={"ID":"54960e89-4e49-4c21-bea4-cc46fcf8edba","Type":"ContainerStarted","Data":"5f548bef4662c0b0b936093c9136f583950edcc8d1ca2902794c964b1fcc97e9"} Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.601224 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4s5l\" (UniqueName: \"kubernetes.io/projected/fd22a22c-e9a1-4ca8-991c-100337423ece-kube-api-access-w4s5l\") pod \"etcd-operator-b45778765-vwm48\" (UID: \"fd22a22c-e9a1-4ca8-991c-100337423ece\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vwm48" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.607855 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vwm48" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.619085 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzxkd" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.640321 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rt422" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.662003 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf4nw\" (UniqueName: \"kubernetes.io/projected/375b62a4-7c53-4d18-8bf9-f9378321a8de-kube-api-access-bf4nw\") pod \"ingress-operator-5b745b69d9-dnfn7\" (UID: \"375b62a4-7c53-4d18-8bf9-f9378321a8de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dnfn7" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.667290 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmtsv\" (UniqueName: \"kubernetes.io/projected/51766f22-0ddf-4f2e-bbbd-059431d6ef4e-kube-api-access-jmtsv\") pod \"marketplace-operator-79b997595-zj2bn\" (UID: \"51766f22-0ddf-4f2e-bbbd-059431d6ef4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-zj2bn" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.680605 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zj2bn" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.687677 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/375b62a4-7c53-4d18-8bf9-f9378321a8de-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dnfn7\" (UID: \"375b62a4-7c53-4d18-8bf9-f9378321a8de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dnfn7" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.688134 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7jpf" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.695662 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4mnm" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.708563 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9vqg\" (UniqueName: \"kubernetes.io/projected/78d9587b-0bca-4439-8518-f652be926d70-kube-api-access-t9vqg\") pod \"multus-admission-controller-857f4d67dd-g9j6m\" (UID: \"78d9587b-0bca-4439-8518-f652be926d70\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g9j6m" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.709819 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4wqht"] Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.726120 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp22d\" (UniqueName: \"kubernetes.io/projected/c121048e-9df5-412a-9d86-e7cf8a59d0e1-kube-api-access-tp22d\") pod \"downloads-7954f5f757-v4fq8\" (UID: \"c121048e-9df5-412a-9d86-e7cf8a59d0e1\") " pod="openshift-console/downloads-7954f5f757-v4fq8" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.728481 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-dtcjv"] Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.741129 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjp6t\" (UniqueName: \"kubernetes.io/projected/e360f8b0-0f1b-4a9b-9aed-cd0a8976482a-kube-api-access-gjp6t\") pod \"router-default-5444994796-wnvhn\" (UID: \"e360f8b0-0f1b-4a9b-9aed-cd0a8976482a\") " pod="openshift-ingress/router-default-5444994796-wnvhn" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.759993 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvrp4\" (UniqueName: \"kubernetes.io/projected/9b3e0393-fe92-4ca0-b9ce-85bceddbfad4-kube-api-access-gvrp4\") pod \"packageserver-d55dfcdfc-n22nk\" (UID: \"9b3e0393-fe92-4ca0-b9ce-85bceddbfad4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n22nk" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.764758 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsjj4" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.779918 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e69273f1-a871-402d-bb09-2150ee1134b1-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-htq9g\" (UID: \"e69273f1-a871-402d-bb09-2150ee1134b1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htq9g" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.785113 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-v4fq8" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.800649 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p2sk\" (UniqueName: \"kubernetes.io/projected/9074f73d-a336-4a52-960b-b18e219d12a5-kube-api-access-6p2sk\") pod \"package-server-manager-789f6589d5-x82fl\" (UID: \"9074f73d-a336-4a52-960b-b18e219d12a5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x82fl" Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.972372 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8pjf2"] Dec 01 09:34:06 crc kubenswrapper[4933]: I1201 09:34:06.974396 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nwhhr"] Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.120898 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h52m\" (UniqueName: \"kubernetes.io/projected/797e5656-fa46-48f0-a336-40560b3da3a5-kube-api-access-7h52m\") pod \"dns-default-lmgsj\" (UID: \"797e5656-fa46-48f0-a336-40560b3da3a5\") " pod="openshift-dns/dns-default-lmgsj" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.121050 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xdrhr" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.121076 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dnfn7" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.121173 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-wnvhn" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.121200 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htq9g" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.121631 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-g9j6m" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.121810 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x82fl" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.121827 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/123185e0-6f42-4a97-8107-c1e8a91d0ea9-bound-sa-token\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.121879 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/123185e0-6f42-4a97-8107-c1e8a91d0ea9-trusted-ca\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.121932 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.122110 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/123185e0-6f42-4a97-8107-c1e8a91d0ea9-installation-pull-secrets\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.122238 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n22nk" Dec 01 09:34:07 crc kubenswrapper[4933]: E1201 09:34:07.122438 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:07.622412045 +0000 UTC m=+138.264135850 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.122486 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-lmgsj" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.124150 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/123185e0-6f42-4a97-8107-c1e8a91d0ea9-registry-certificates\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.124547 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dcdv\" (UniqueName: \"kubernetes.io/projected/123185e0-6f42-4a97-8107-c1e8a91d0ea9-kube-api-access-6dcdv\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.125695 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/123185e0-6f42-4a97-8107-c1e8a91d0ea9-registry-tls\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.125760 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/123185e0-6f42-4a97-8107-c1e8a91d0ea9-ca-trust-extracted\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:07 crc kubenswrapper[4933]: W1201 09:34:07.137202 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2020a25e_c390_4919_8f4f_3472caca4c14.slice/crio-a4a3c16a31f17346befa7d5331c4f40fd53684cb32fd9f2a2858b13640a01ff7 WatchSource:0}: Error finding container a4a3c16a31f17346befa7d5331c4f40fd53684cb32fd9f2a2858b13640a01ff7: Status 404 returned error can't find the container with id a4a3c16a31f17346befa7d5331c4f40fd53684cb32fd9f2a2858b13640a01ff7 Dec 01 09:34:07 crc kubenswrapper[4933]: W1201 09:34:07.139629 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d1dbe4f_c837_4206_aa33_8ad657a3f4e5.slice/crio-47dd74ff0a0a9b1f44c592d45e3bca3ec06dd9a7d68f59fd035a23cef8f75e60 WatchSource:0}: Error finding container 47dd74ff0a0a9b1f44c592d45e3bca3ec06dd9a7d68f59fd035a23cef8f75e60: Status 404 returned error can't find the container with id 47dd74ff0a0a9b1f44c592d45e3bca3ec06dd9a7d68f59fd035a23cef8f75e60 Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.227655 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.227920 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9k99\" (UniqueName: 
\"kubernetes.io/projected/a398625d-22e4-4bd7-a1e3-5231df797e36-kube-api-access-c9k99\") pod \"machine-config-operator-74547568cd-mjkjp\" (UID: \"a398625d-22e4-4bd7-a1e3-5231df797e36\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mjkjp" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.227949 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqgl7\" (UniqueName: \"kubernetes.io/projected/d24a7072-f439-4db8-8de8-216470593141-kube-api-access-wqgl7\") pod \"ingress-canary-s62bx\" (UID: \"d24a7072-f439-4db8-8de8-216470593141\") " pod="openshift-ingress-canary/ingress-canary-s62bx" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.227973 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/64c1704c-f0a1-4401-bc2d-46febb3ba534-plugins-dir\") pod \"csi-hostpathplugin-q5ch5\" (UID: \"64c1704c-f0a1-4401-bc2d-46febb3ba534\") " pod="hostpath-provisioner/csi-hostpathplugin-q5ch5" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.228036 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c10447e-0359-4448-9e95-f4952176901c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-n42jc\" (UID: \"1c10447e-0359-4448-9e95-f4952176901c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n42jc" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.228070 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9d7ca308-276f-4775-8828-abee226710b6-metrics-tls\") pod \"dns-operator-744455d44c-sncl8\" (UID: \"9d7ca308-276f-4775-8828-abee226710b6\") " pod="openshift-dns-operator/dns-operator-744455d44c-sncl8" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.228094 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/df074746-e6a5-490c-86e9-4e4c969ca5fe-node-bootstrap-token\") pod \"machine-config-server-t76mv\" (UID: \"df074746-e6a5-490c-86e9-4e4c969ca5fe\") " pod="openshift-machine-config-operator/machine-config-server-t76mv" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.228110 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8cf4d910-9922-451e-a6b6-1fa833027e5d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-sfb5s\" (UID: \"8cf4d910-9922-451e-a6b6-1fa833027e5d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sfb5s" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.228142 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l6nn\" (UniqueName: \"kubernetes.io/projected/1c10447e-0359-4448-9e95-f4952176901c-kube-api-access-9l6nn\") pod \"kube-storage-version-migrator-operator-b67b599dd-n42jc\" (UID: \"1c10447e-0359-4448-9e95-f4952176901c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n42jc" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.228180 4933 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj8r9\" (UniqueName: \"kubernetes.io/projected/a7bd5924-9a3f-43cf-99b1-2d5d20975f81-kube-api-access-cj8r9\") pod \"control-plane-machine-set-operator-78cbb6b69f-6dwg7\" (UID: \"a7bd5924-9a3f-43cf-99b1-2d5d20975f81\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6dwg7" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.228207 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/123185e0-6f42-4a97-8107-c1e8a91d0ea9-trusted-ca\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.228233 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/123185e0-6f42-4a97-8107-c1e8a91d0ea9-bound-sa-token\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.228380 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8pqc\" (UniqueName: \"kubernetes.io/projected/9d7ca308-276f-4775-8828-abee226710b6-kube-api-access-c8pqc\") pod \"dns-operator-744455d44c-sncl8\" (UID: \"9d7ca308-276f-4775-8828-abee226710b6\") " pod="openshift-dns-operator/dns-operator-744455d44c-sncl8" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.228407 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/df074746-e6a5-490c-86e9-4e4c969ca5fe-certs\") pod \"machine-config-server-t76mv\" (UID: \"df074746-e6a5-490c-86e9-4e4c969ca5fe\") " pod="openshift-machine-config-operator/machine-config-server-t76mv" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.228462 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv9w2\" (UniqueName: \"kubernetes.io/projected/a3422f33-b5ab-4658-86a0-c908efca7db9-kube-api-access-jv9w2\") pod \"collect-profiles-29409690-jvl8l\" (UID: \"a3422f33-b5ab-4658-86a0-c908efca7db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-jvl8l" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.228492 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a398625d-22e4-4bd7-a1e3-5231df797e36-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mjkjp\" (UID: \"a398625d-22e4-4bd7-a1e3-5231df797e36\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mjkjp" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.228543 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/025dba90-61ee-4610-b166-737b0ef8b7b5-serving-cert\") pod \"service-ca-operator-777779d784-84fbz\" (UID: \"025dba90-61ee-4610-b166-737b0ef8b7b5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-84fbz" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.228564 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cert\" (UniqueName: \"kubernetes.io/secret/d24a7072-f439-4db8-8de8-216470593141-cert\") pod \"ingress-canary-s62bx\" (UID: \"d24a7072-f439-4db8-8de8-216470593141\") " pod="openshift-ingress-canary/ingress-canary-s62bx" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.228627 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/123185e0-6f42-4a97-8107-c1e8a91d0ea9-installation-pull-secrets\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.228739 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/62125939-842d-45b3-824f-a45038a3226a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tg9q9\" (UID: \"62125939-842d-45b3-824f-a45038a3226a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tg9q9" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.228763 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3422f33-b5ab-4658-86a0-c908efca7db9-secret-volume\") pod \"collect-profiles-29409690-jvl8l\" (UID: \"a3422f33-b5ab-4658-86a0-c908efca7db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-jvl8l" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.228810 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cf4d910-9922-451e-a6b6-1fa833027e5d-config\") pod \"kube-controller-manager-operator-78b949d7b-sfb5s\" (UID: \"8cf4d910-9922-451e-a6b6-1fa833027e5d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sfb5s" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.228860 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4193888f-65e7-4818-b3e4-0dc9d1a32ed4-proxy-tls\") pod \"machine-config-controller-84d6567774-6lzfq\" (UID: \"4193888f-65e7-4818-b3e4-0dc9d1a32ed4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lzfq" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.228911 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4193888f-65e7-4818-b3e4-0dc9d1a32ed4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6lzfq\" (UID: \"4193888f-65e7-4818-b3e4-0dc9d1a32ed4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lzfq" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.228935 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/025dba90-61ee-4610-b166-737b0ef8b7b5-config\") pod \"service-ca-operator-777779d784-84fbz\" (UID: \"025dba90-61ee-4610-b166-737b0ef8b7b5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-84fbz" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.228974 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zqncj\" (UniqueName: \"kubernetes.io/projected/4193888f-65e7-4818-b3e4-0dc9d1a32ed4-kube-api-access-zqncj\") pod \"machine-config-controller-84d6567774-6lzfq\" (UID: \"4193888f-65e7-4818-b3e4-0dc9d1a32ed4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lzfq" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.229023 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a398625d-22e4-4bd7-a1e3-5231df797e36-images\") pod \"machine-config-operator-74547568cd-mjkjp\" (UID: \"a398625d-22e4-4bd7-a1e3-5231df797e36\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mjkjp" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.229098 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/123185e0-6f42-4a97-8107-c1e8a91d0ea9-registry-certificates\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.229125 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/64c1704c-f0a1-4401-bc2d-46febb3ba534-registration-dir\") pod \"csi-hostpathplugin-q5ch5\" (UID: \"64c1704c-f0a1-4401-bc2d-46febb3ba534\") " pod="hostpath-provisioner/csi-hostpathplugin-q5ch5" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.229146 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crs62\" (UniqueName: \"kubernetes.io/projected/a23201bd-20d0-4ba3-92c5-954403c260f9-kube-api-access-crs62\") pod \"service-ca-9c57cc56f-2clwd\" (UID: \"a23201bd-20d0-4ba3-92c5-954403c260f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-2clwd" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.229220 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82g7d\" (UniqueName: \"kubernetes.io/projected/62125939-842d-45b3-824f-a45038a3226a-kube-api-access-82g7d\") pod \"cluster-samples-operator-665b6dd947-tg9q9\" (UID: \"62125939-842d-45b3-824f-a45038a3226a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tg9q9" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.229244 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a23201bd-20d0-4ba3-92c5-954403c260f9-signing-cabundle\") pod \"service-ca-9c57cc56f-2clwd\" (UID: \"a23201bd-20d0-4ba3-92c5-954403c260f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-2clwd" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.229274 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/64c1704c-f0a1-4401-bc2d-46febb3ba534-socket-dir\") pod \"csi-hostpathplugin-q5ch5\" (UID: \"64c1704c-f0a1-4401-bc2d-46febb3ba534\") " pod="hostpath-provisioner/csi-hostpathplugin-q5ch5" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.229335 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxhx5\" (UniqueName: 
\"kubernetes.io/projected/df074746-e6a5-490c-86e9-4e4c969ca5fe-kube-api-access-hxhx5\") pod \"machine-config-server-t76mv\" (UID: \"df074746-e6a5-490c-86e9-4e4c969ca5fe\") " pod="openshift-machine-config-operator/machine-config-server-t76mv" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.229375 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kckn\" (UniqueName: \"kubernetes.io/projected/3e4cfcb2-ffde-40ce-8934-6f63d1816e9b-kube-api-access-6kckn\") pod \"migrator-59844c95c7-vbvt7\" (UID: \"3e4cfcb2-ffde-40ce-8934-6f63d1816e9b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vbvt7" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.229444 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c10447e-0359-4448-9e95-f4952176901c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-n42jc\" (UID: \"1c10447e-0359-4448-9e95-f4952176901c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n42jc" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.229470 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a398625d-22e4-4bd7-a1e3-5231df797e36-proxy-tls\") pod \"machine-config-operator-74547568cd-mjkjp\" (UID: \"a398625d-22e4-4bd7-a1e3-5231df797e36\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mjkjp" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.229507 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a23201bd-20d0-4ba3-92c5-954403c260f9-signing-key\") pod \"service-ca-9c57cc56f-2clwd\" (UID: \"a23201bd-20d0-4ba3-92c5-954403c260f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-2clwd" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.229531 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpdzg\" (UniqueName: \"kubernetes.io/projected/64c1704c-f0a1-4401-bc2d-46febb3ba534-kube-api-access-tpdzg\") pod \"csi-hostpathplugin-q5ch5\" (UID: \"64c1704c-f0a1-4401-bc2d-46febb3ba534\") " pod="hostpath-provisioner/csi-hostpathplugin-q5ch5" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.229556 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dcdv\" (UniqueName: \"kubernetes.io/projected/123185e0-6f42-4a97-8107-c1e8a91d0ea9-kube-api-access-6dcdv\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:07 crc kubenswrapper[4933]: E1201 09:34:07.230119 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:07.730095942 +0000 UTC m=+138.371819747 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.230216 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/64c1704c-f0a1-4401-bc2d-46febb3ba534-csi-data-dir\") pod \"csi-hostpathplugin-q5ch5\" (UID: \"64c1704c-f0a1-4401-bc2d-46febb3ba534\") " pod="hostpath-provisioner/csi-hostpathplugin-q5ch5" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.230266 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/64c1704c-f0a1-4401-bc2d-46febb3ba534-mountpoint-dir\") pod \"csi-hostpathplugin-q5ch5\" (UID: \"64c1704c-f0a1-4401-bc2d-46febb3ba534\") " pod="hostpath-provisioner/csi-hostpathplugin-q5ch5" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.230373 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/123185e0-6f42-4a97-8107-c1e8a91d0ea9-registry-tls\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.230425 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a7bd5924-9a3f-43cf-99b1-2d5d20975f81-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6dwg7\" (UID: \"a7bd5924-9a3f-43cf-99b1-2d5d20975f81\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6dwg7" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.230459 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx878\" (UniqueName: \"kubernetes.io/projected/025dba90-61ee-4610-b166-737b0ef8b7b5-kube-api-access-rx878\") pod \"service-ca-operator-777779d784-84fbz\" (UID: \"025dba90-61ee-4610-b166-737b0ef8b7b5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-84fbz" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.230491 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/123185e0-6f42-4a97-8107-c1e8a91d0ea9-ca-trust-extracted\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.230510 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cf4d910-9922-451e-a6b6-1fa833027e5d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-sfb5s\" (UID: \"8cf4d910-9922-451e-a6b6-1fa833027e5d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sfb5s" Dec 01 09:34:07 crc 
kubenswrapper[4933]: I1201 09:34:07.230528 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3422f33-b5ab-4658-86a0-c908efca7db9-config-volume\") pod \"collect-profiles-29409690-jvl8l\" (UID: \"a3422f33-b5ab-4658-86a0-c908efca7db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-jvl8l" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.258054 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/123185e0-6f42-4a97-8107-c1e8a91d0ea9-ca-trust-extracted\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.258467 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/123185e0-6f42-4a97-8107-c1e8a91d0ea9-trusted-ca\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.264038 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/123185e0-6f42-4a97-8107-c1e8a91d0ea9-registry-tls\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.264337 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/123185e0-6f42-4a97-8107-c1e8a91d0ea9-registry-certificates\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.272790 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/123185e0-6f42-4a97-8107-c1e8a91d0ea9-installation-pull-secrets\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.325759 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/123185e0-6f42-4a97-8107-c1e8a91d0ea9-bound-sa-token\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.333226 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/df074746-e6a5-490c-86e9-4e4c969ca5fe-node-bootstrap-token\") pod \"machine-config-server-t76mv\" (UID: \"df074746-e6a5-490c-86e9-4e4c969ca5fe\") " pod="openshift-machine-config-operator/machine-config-server-t76mv" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.333333 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8cf4d910-9922-451e-a6b6-1fa833027e5d-kube-api-access\") pod 
\"kube-controller-manager-operator-78b949d7b-sfb5s\" (UID: \"8cf4d910-9922-451e-a6b6-1fa833027e5d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sfb5s" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.333364 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l6nn\" (UniqueName: \"kubernetes.io/projected/1c10447e-0359-4448-9e95-f4952176901c-kube-api-access-9l6nn\") pod \"kube-storage-version-migrator-operator-b67b599dd-n42jc\" (UID: \"1c10447e-0359-4448-9e95-f4952176901c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n42jc" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.333418 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj8r9\" (UniqueName: \"kubernetes.io/projected/a7bd5924-9a3f-43cf-99b1-2d5d20975f81-kube-api-access-cj8r9\") pod \"control-plane-machine-set-operator-78cbb6b69f-6dwg7\" (UID: \"a7bd5924-9a3f-43cf-99b1-2d5d20975f81\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6dwg7" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.333460 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.333491 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8pqc\" (UniqueName: \"kubernetes.io/projected/9d7ca308-276f-4775-8828-abee226710b6-kube-api-access-c8pqc\") pod \"dns-operator-744455d44c-sncl8\" (UID: \"9d7ca308-276f-4775-8828-abee226710b6\") " pod="openshift-dns-operator/dns-operator-744455d44c-sncl8" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.333514 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/df074746-e6a5-490c-86e9-4e4c969ca5fe-certs\") pod \"machine-config-server-t76mv\" (UID: \"df074746-e6a5-490c-86e9-4e4c969ca5fe\") " pod="openshift-machine-config-operator/machine-config-server-t76mv" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.333549 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv9w2\" (UniqueName: \"kubernetes.io/projected/a3422f33-b5ab-4658-86a0-c908efca7db9-kube-api-access-jv9w2\") pod \"collect-profiles-29409690-jvl8l\" (UID: \"a3422f33-b5ab-4658-86a0-c908efca7db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-jvl8l" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.333582 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a398625d-22e4-4bd7-a1e3-5231df797e36-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mjkjp\" (UID: \"a398625d-22e4-4bd7-a1e3-5231df797e36\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mjkjp" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.333614 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/025dba90-61ee-4610-b166-737b0ef8b7b5-serving-cert\") pod 
\"service-ca-operator-777779d784-84fbz\" (UID: \"025dba90-61ee-4610-b166-737b0ef8b7b5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-84fbz" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.333638 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d24a7072-f439-4db8-8de8-216470593141-cert\") pod \"ingress-canary-s62bx\" (UID: \"d24a7072-f439-4db8-8de8-216470593141\") " pod="openshift-ingress-canary/ingress-canary-s62bx" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.333683 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/62125939-842d-45b3-824f-a45038a3226a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tg9q9\" (UID: \"62125939-842d-45b3-824f-a45038a3226a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tg9q9" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.333708 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3422f33-b5ab-4658-86a0-c908efca7db9-secret-volume\") pod \"collect-profiles-29409690-jvl8l\" (UID: \"a3422f33-b5ab-4658-86a0-c908efca7db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-jvl8l" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.333737 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cf4d910-9922-451e-a6b6-1fa833027e5d-config\") pod \"kube-controller-manager-operator-78b949d7b-sfb5s\" (UID: \"8cf4d910-9922-451e-a6b6-1fa833027e5d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sfb5s" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.337841 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4193888f-65e7-4818-b3e4-0dc9d1a32ed4-proxy-tls\") pod \"machine-config-controller-84d6567774-6lzfq\" (UID: \"4193888f-65e7-4818-b3e4-0dc9d1a32ed4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lzfq" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.337902 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4193888f-65e7-4818-b3e4-0dc9d1a32ed4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6lzfq\" (UID: \"4193888f-65e7-4818-b3e4-0dc9d1a32ed4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lzfq" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.337928 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/025dba90-61ee-4610-b166-737b0ef8b7b5-config\") pod \"service-ca-operator-777779d784-84fbz\" (UID: \"025dba90-61ee-4610-b166-737b0ef8b7b5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-84fbz" Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.337971 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a398625d-22e4-4bd7-a1e3-5231df797e36-images\") pod \"machine-config-operator-74547568cd-mjkjp\" (UID: \"a398625d-22e4-4bd7-a1e3-5231df797e36\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mjkjp" 
Dec 01 09:34:07 crc kubenswrapper[4933]: E1201 09:34:07.338126 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:07.838108166 +0000 UTC m=+138.479831781 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.340965 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/df074746-e6a5-490c-86e9-4e4c969ca5fe-certs\") pod \"machine-config-server-t76mv\" (UID: \"df074746-e6a5-490c-86e9-4e4c969ca5fe\") " pod="openshift-machine-config-operator/machine-config-server-t76mv"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.344617 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqncj\" (UniqueName: \"kubernetes.io/projected/4193888f-65e7-4818-b3e4-0dc9d1a32ed4-kube-api-access-zqncj\") pod \"machine-config-controller-84d6567774-6lzfq\" (UID: \"4193888f-65e7-4818-b3e4-0dc9d1a32ed4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lzfq"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.344682 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/64c1704c-f0a1-4401-bc2d-46febb3ba534-registration-dir\") pod \"csi-hostpathplugin-q5ch5\" (UID: \"64c1704c-f0a1-4401-bc2d-46febb3ba534\") " pod="hostpath-provisioner/csi-hostpathplugin-q5ch5"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.344719 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crs62\" (UniqueName: \"kubernetes.io/projected/a23201bd-20d0-4ba3-92c5-954403c260f9-kube-api-access-crs62\") pod \"service-ca-9c57cc56f-2clwd\" (UID: \"a23201bd-20d0-4ba3-92c5-954403c260f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-2clwd"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.344757 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82g7d\" (UniqueName: \"kubernetes.io/projected/62125939-842d-45b3-824f-a45038a3226a-kube-api-access-82g7d\") pod \"cluster-samples-operator-665b6dd947-tg9q9\" (UID: \"62125939-842d-45b3-824f-a45038a3226a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tg9q9"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.344785 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a23201bd-20d0-4ba3-92c5-954403c260f9-signing-cabundle\") pod \"service-ca-9c57cc56f-2clwd\" (UID: \"a23201bd-20d0-4ba3-92c5-954403c260f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-2clwd"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.345013 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3422f33-b5ab-4658-86a0-c908efca7db9-secret-volume\") pod \"collect-profiles-29409690-jvl8l\" (UID: \"a3422f33-b5ab-4658-86a0-c908efca7db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-jvl8l"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.345503 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/64c1704c-f0a1-4401-bc2d-46febb3ba534-registration-dir\") pod \"csi-hostpathplugin-q5ch5\" (UID: \"64c1704c-f0a1-4401-bc2d-46febb3ba534\") " pod="hostpath-provisioner/csi-hostpathplugin-q5ch5"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.347382 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a23201bd-20d0-4ba3-92c5-954403c260f9-signing-cabundle\") pod \"service-ca-9c57cc56f-2clwd\" (UID: \"a23201bd-20d0-4ba3-92c5-954403c260f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-2clwd"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.347807 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4193888f-65e7-4818-b3e4-0dc9d1a32ed4-proxy-tls\") pod \"machine-config-controller-84d6567774-6lzfq\" (UID: \"4193888f-65e7-4818-b3e4-0dc9d1a32ed4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lzfq"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.348142 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/025dba90-61ee-4610-b166-737b0ef8b7b5-config\") pod \"service-ca-operator-777779d784-84fbz\" (UID: \"025dba90-61ee-4610-b166-737b0ef8b7b5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-84fbz"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.350965 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a398625d-22e4-4bd7-a1e3-5231df797e36-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mjkjp\" (UID: \"a398625d-22e4-4bd7-a1e3-5231df797e36\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mjkjp"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.351976 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4193888f-65e7-4818-b3e4-0dc9d1a32ed4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6lzfq\" (UID: \"4193888f-65e7-4818-b3e4-0dc9d1a32ed4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lzfq"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.353211 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d24a7072-f439-4db8-8de8-216470593141-cert\") pod \"ingress-canary-s62bx\" (UID: \"d24a7072-f439-4db8-8de8-216470593141\") " pod="openshift-ingress-canary/ingress-canary-s62bx"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.353381 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/64c1704c-f0a1-4401-bc2d-46febb3ba534-socket-dir\") pod \"csi-hostpathplugin-q5ch5\" (UID: \"64c1704c-f0a1-4401-bc2d-46febb3ba534\") " pod="hostpath-provisioner/csi-hostpathplugin-q5ch5"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.353487 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxhx5\" (UniqueName: \"kubernetes.io/projected/df074746-e6a5-490c-86e9-4e4c969ca5fe-kube-api-access-hxhx5\") pod \"machine-config-server-t76mv\" (UID: \"df074746-e6a5-490c-86e9-4e4c969ca5fe\") " pod="openshift-machine-config-operator/machine-config-server-t76mv"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.353522 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kckn\" (UniqueName: \"kubernetes.io/projected/3e4cfcb2-ffde-40ce-8934-6f63d1816e9b-kube-api-access-6kckn\") pod \"migrator-59844c95c7-vbvt7\" (UID: \"3e4cfcb2-ffde-40ce-8934-6f63d1816e9b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vbvt7"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.353539 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/64c1704c-f0a1-4401-bc2d-46febb3ba534-socket-dir\") pod \"csi-hostpathplugin-q5ch5\" (UID: \"64c1704c-f0a1-4401-bc2d-46febb3ba534\") " pod="hostpath-provisioner/csi-hostpathplugin-q5ch5"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.353563 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c10447e-0359-4448-9e95-f4952176901c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-n42jc\" (UID: \"1c10447e-0359-4448-9e95-f4952176901c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n42jc"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.353591 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a398625d-22e4-4bd7-a1e3-5231df797e36-proxy-tls\") pod \"machine-config-operator-74547568cd-mjkjp\" (UID: \"a398625d-22e4-4bd7-a1e3-5231df797e36\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mjkjp"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.353693 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a23201bd-20d0-4ba3-92c5-954403c260f9-signing-key\") pod \"service-ca-9c57cc56f-2clwd\" (UID: \"a23201bd-20d0-4ba3-92c5-954403c260f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-2clwd"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.353722 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpdzg\" (UniqueName: \"kubernetes.io/projected/64c1704c-f0a1-4401-bc2d-46febb3ba534-kube-api-access-tpdzg\") pod \"csi-hostpathplugin-q5ch5\" (UID: \"64c1704c-f0a1-4401-bc2d-46febb3ba534\") " pod="hostpath-provisioner/csi-hostpathplugin-q5ch5"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.353811 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/64c1704c-f0a1-4401-bc2d-46febb3ba534-mountpoint-dir\") pod \"csi-hostpathplugin-q5ch5\" (UID: \"64c1704c-f0a1-4401-bc2d-46febb3ba534\") " pod="hostpath-provisioner/csi-hostpathplugin-q5ch5"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.353841 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/64c1704c-f0a1-4401-bc2d-46febb3ba534-csi-data-dir\") pod \"csi-hostpathplugin-q5ch5\" (UID: \"64c1704c-f0a1-4401-bc2d-46febb3ba534\") " pod="hostpath-provisioner/csi-hostpathplugin-q5ch5"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.353922 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a398625d-22e4-4bd7-a1e3-5231df797e36-images\") pod \"machine-config-operator-74547568cd-mjkjp\" (UID: \"a398625d-22e4-4bd7-a1e3-5231df797e36\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mjkjp"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.353958 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a7bd5924-9a3f-43cf-99b1-2d5d20975f81-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6dwg7\" (UID: \"a7bd5924-9a3f-43cf-99b1-2d5d20975f81\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6dwg7"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.353994 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx878\" (UniqueName: \"kubernetes.io/projected/025dba90-61ee-4610-b166-737b0ef8b7b5-kube-api-access-rx878\") pod \"service-ca-operator-777779d784-84fbz\" (UID: \"025dba90-61ee-4610-b166-737b0ef8b7b5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-84fbz"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.354024 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cf4d910-9922-451e-a6b6-1fa833027e5d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-sfb5s\" (UID: \"8cf4d910-9922-451e-a6b6-1fa833027e5d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sfb5s"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.354048 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3422f33-b5ab-4658-86a0-c908efca7db9-config-volume\") pod \"collect-profiles-29409690-jvl8l\" (UID: \"a3422f33-b5ab-4658-86a0-c908efca7db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-jvl8l"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.354123 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9k99\" (UniqueName: \"kubernetes.io/projected/a398625d-22e4-4bd7-a1e3-5231df797e36-kube-api-access-c9k99\") pod \"machine-config-operator-74547568cd-mjkjp\" (UID: \"a398625d-22e4-4bd7-a1e3-5231df797e36\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mjkjp"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.354154 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqgl7\" (UniqueName: \"kubernetes.io/projected/d24a7072-f439-4db8-8de8-216470593141-kube-api-access-wqgl7\") pod \"ingress-canary-s62bx\" (UID: \"d24a7072-f439-4db8-8de8-216470593141\") " pod="openshift-ingress-canary/ingress-canary-s62bx"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.354178 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/64c1704c-f0a1-4401-bc2d-46febb3ba534-plugins-dir\") pod \"csi-hostpathplugin-q5ch5\" (UID: \"64c1704c-f0a1-4401-bc2d-46febb3ba534\") " pod="hostpath-provisioner/csi-hostpathplugin-q5ch5"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.354209 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c10447e-0359-4448-9e95-f4952176901c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-n42jc\" (UID: \"1c10447e-0359-4448-9e95-f4952176901c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n42jc"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.354371 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9d7ca308-276f-4775-8828-abee226710b6-metrics-tls\") pod \"dns-operator-744455d44c-sncl8\" (UID: \"9d7ca308-276f-4775-8828-abee226710b6\") " pod="openshift-dns-operator/dns-operator-744455d44c-sncl8"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.355597 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dcdv\" (UniqueName: \"kubernetes.io/projected/123185e0-6f42-4a97-8107-c1e8a91d0ea9-kube-api-access-6dcdv\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.355785 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/df074746-e6a5-490c-86e9-4e4c969ca5fe-node-bootstrap-token\") pod \"machine-config-server-t76mv\" (UID: \"df074746-e6a5-490c-86e9-4e4c969ca5fe\") " pod="openshift-machine-config-operator/machine-config-server-t76mv"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.355790 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3422f33-b5ab-4658-86a0-c908efca7db9-config-volume\") pod \"collect-profiles-29409690-jvl8l\" (UID: \"a3422f33-b5ab-4658-86a0-c908efca7db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-jvl8l"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.356803 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cf4d910-9922-451e-a6b6-1fa833027e5d-config\") pod \"kube-controller-manager-operator-78b949d7b-sfb5s\" (UID: \"8cf4d910-9922-451e-a6b6-1fa833027e5d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sfb5s"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.357200 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/64c1704c-f0a1-4401-bc2d-46febb3ba534-mountpoint-dir\") pod \"csi-hostpathplugin-q5ch5\" (UID: \"64c1704c-f0a1-4401-bc2d-46febb3ba534\") " pod="hostpath-provisioner/csi-hostpathplugin-q5ch5"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.357349 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/64c1704c-f0a1-4401-bc2d-46febb3ba534-plugins-dir\") pod \"csi-hostpathplugin-q5ch5\" (UID: \"64c1704c-f0a1-4401-bc2d-46febb3ba534\") " pod="hostpath-provisioner/csi-hostpathplugin-q5ch5"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.357419 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/64c1704c-f0a1-4401-bc2d-46febb3ba534-csi-data-dir\") pod \"csi-hostpathplugin-q5ch5\" (UID: \"64c1704c-f0a1-4401-bc2d-46febb3ba534\") " pod="hostpath-provisioner/csi-hostpathplugin-q5ch5"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.359881 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a398625d-22e4-4bd7-a1e3-5231df797e36-proxy-tls\") pod \"machine-config-operator-74547568cd-mjkjp\" (UID: \"a398625d-22e4-4bd7-a1e3-5231df797e36\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mjkjp"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.361667 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/025dba90-61ee-4610-b166-737b0ef8b7b5-serving-cert\") pod \"service-ca-operator-777779d784-84fbz\" (UID: \"025dba90-61ee-4610-b166-737b0ef8b7b5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-84fbz"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.361992 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c10447e-0359-4448-9e95-f4952176901c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-n42jc\" (UID: \"1c10447e-0359-4448-9e95-f4952176901c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n42jc"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.362643 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/62125939-842d-45b3-824f-a45038a3226a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tg9q9\" (UID: \"62125939-842d-45b3-824f-a45038a3226a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tg9q9"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.362794 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a23201bd-20d0-4ba3-92c5-954403c260f9-signing-key\") pod \"service-ca-9c57cc56f-2clwd\" (UID: \"a23201bd-20d0-4ba3-92c5-954403c260f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-2clwd"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.368161 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cf4d910-9922-451e-a6b6-1fa833027e5d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-sfb5s\" (UID: \"8cf4d910-9922-451e-a6b6-1fa833027e5d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sfb5s"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.368886 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a7bd5924-9a3f-43cf-99b1-2d5d20975f81-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6dwg7\" (UID: \"a7bd5924-9a3f-43cf-99b1-2d5d20975f81\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6dwg7"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.371631 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9d7ca308-276f-4775-8828-abee226710b6-metrics-tls\") pod \"dns-operator-744455d44c-sncl8\" (UID: \"9d7ca308-276f-4775-8828-abee226710b6\") " pod="openshift-dns-operator/dns-operator-744455d44c-sncl8"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.376606 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c10447e-0359-4448-9e95-f4952176901c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-n42jc\" (UID: \"1c10447e-0359-4448-9e95-f4952176901c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n42jc"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.394016 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8cf4d910-9922-451e-a6b6-1fa833027e5d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-sfb5s\" (UID: \"8cf4d910-9922-451e-a6b6-1fa833027e5d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sfb5s"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.404881 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-sh7dc"]
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.407178 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l6nn\" (UniqueName: \"kubernetes.io/projected/1c10447e-0359-4448-9e95-f4952176901c-kube-api-access-9l6nn\") pod \"kube-storage-version-migrator-operator-b67b599dd-n42jc\" (UID: \"1c10447e-0359-4448-9e95-f4952176901c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n42jc"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.428190 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj8r9\" (UniqueName: \"kubernetes.io/projected/a7bd5924-9a3f-43cf-99b1-2d5d20975f81-kube-api-access-cj8r9\") pod \"control-plane-machine-set-operator-78cbb6b69f-6dwg7\" (UID: \"a7bd5924-9a3f-43cf-99b1-2d5d20975f81\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6dwg7"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.453140 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8pqc\" (UniqueName: \"kubernetes.io/projected/9d7ca308-276f-4775-8828-abee226710b6-kube-api-access-c8pqc\") pod \"dns-operator-744455d44c-sncl8\" (UID: \"9d7ca308-276f-4775-8828-abee226710b6\") " pod="openshift-dns-operator/dns-operator-744455d44c-sncl8"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.467450 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 09:34:07 crc kubenswrapper[4933]: E1201 09:34:07.468024 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:07.968004582 +0000 UTC m=+138.609728197 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.491459 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqncj\" (UniqueName: \"kubernetes.io/projected/4193888f-65e7-4818-b3e4-0dc9d1a32ed4-kube-api-access-zqncj\") pod \"machine-config-controller-84d6567774-6lzfq\" (UID: \"4193888f-65e7-4818-b3e4-0dc9d1a32ed4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lzfq"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.494694 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rt422"]
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.498594 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crs62\" (UniqueName: \"kubernetes.io/projected/a23201bd-20d0-4ba3-92c5-954403c260f9-kube-api-access-crs62\") pod \"service-ca-9c57cc56f-2clwd\" (UID: \"a23201bd-20d0-4ba3-92c5-954403c260f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-2clwd"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.514053 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82g7d\" (UniqueName: \"kubernetes.io/projected/62125939-842d-45b3-824f-a45038a3226a-kube-api-access-82g7d\") pod \"cluster-samples-operator-665b6dd947-tg9q9\" (UID: \"62125939-842d-45b3-824f-a45038a3226a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tg9q9"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.523409 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sfb5s"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.530597 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv9w2\" (UniqueName: \"kubernetes.io/projected/a3422f33-b5ab-4658-86a0-c908efca7db9-kube-api-access-jv9w2\") pod \"collect-profiles-29409690-jvl8l\" (UID: \"a3422f33-b5ab-4658-86a0-c908efca7db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-jvl8l"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.538783 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vwm48"]
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.553514 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzxkd"]
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.563633 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lzfq"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.572265 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs"
Dec 01 09:34:07 crc kubenswrapper[4933]: E1201 09:34:07.573389 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:08.07337023 +0000 UTC m=+138.715093845 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.588691 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxhx5\" (UniqueName: \"kubernetes.io/projected/df074746-e6a5-490c-86e9-4e4c969ca5fe-kube-api-access-hxhx5\") pod \"machine-config-server-t76mv\" (UID: \"df074746-e6a5-490c-86e9-4e4c969ca5fe\") " pod="openshift-machine-config-operator/machine-config-server-t76mv"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.589051 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh"]
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.590958 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kckn\" (UniqueName: \"kubernetes.io/projected/3e4cfcb2-ffde-40ce-8934-6f63d1816e9b-kube-api-access-6kckn\") pod \"migrator-59844c95c7-vbvt7\" (UID: \"3e4cfcb2-ffde-40ce-8934-6f63d1816e9b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vbvt7"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.597043 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9k99\" (UniqueName: \"kubernetes.io/projected/a398625d-22e4-4bd7-a1e3-5231df797e36-kube-api-access-c9k99\") pod \"machine-config-operator-74547568cd-mjkjp\" (UID: \"a398625d-22e4-4bd7-a1e3-5231df797e36\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mjkjp"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.603638 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n42jc"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.605788 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx878\" (UniqueName: \"kubernetes.io/projected/025dba90-61ee-4610-b166-737b0ef8b7b5-kube-api-access-rx878\") pod \"service-ca-operator-777779d784-84fbz\" (UID: \"025dba90-61ee-4610-b166-737b0ef8b7b5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-84fbz"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.611132 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-jvl8l"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.615660 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs" event={"ID":"54960e89-4e49-4c21-bea4-cc46fcf8edba","Type":"ContainerStarted","Data":"1559601dd6befb3002cb2dbdbe9dd92056f311f9b557a5befdb5fc36180fbfa4"}
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.616706 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.618934 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8pjf2" event={"ID":"5d1dbe4f-c837-4206-aa33-8ad657a3f4e5","Type":"ContainerStarted","Data":"47dd74ff0a0a9b1f44c592d45e3bca3ec06dd9a7d68f59fd035a23cef8f75e60"}
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.623973 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-2clwd"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.628171 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hpmth" event={"ID":"46ceb35e-f316-481d-acdc-c61617f13e5f","Type":"ContainerStarted","Data":"f97af2d8782e3814824f70a89809c73d46a3b674e0ec983ffd60b1c70485ccf3"}
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.628229 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hpmth" event={"ID":"46ceb35e-f316-481d-acdc-c61617f13e5f","Type":"ContainerStarted","Data":"5e9aacd59f348c5081ed4e154bee982fe4b275d1ea874288b750b14d5e88c9f0"}
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.629271 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpdzg\" (UniqueName: \"kubernetes.io/projected/64c1704c-f0a1-4401-bc2d-46febb3ba534-kube-api-access-tpdzg\") pod \"csi-hostpathplugin-q5ch5\" (UID: \"64c1704c-f0a1-4401-bc2d-46febb3ba534\") " pod="hostpath-provisioner/csi-hostpathplugin-q5ch5"
Dec 01 09:34:07 crc kubenswrapper[4933]: W1201 09:34:07.632520 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode360f8b0_0f1b_4a9b_9aed_cd0a8976482a.slice/crio-8f081f82f07d03c15ba725528aa0304beabff151cfebd6c3b6ef3904bbb5bab2 WatchSource:0}: Error finding container 8f081f82f07d03c15ba725528aa0304beabff151cfebd6c3b6ef3904bbb5bab2: Status 404 returned error can't find the container with id 8f081f82f07d03c15ba725528aa0304beabff151cfebd6c3b6ef3904bbb5bab2
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.634417 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x74qn" event={"ID":"45bbe65f-8e73-4b73-863c-15db667e3e22","Type":"ContainerStarted","Data":"02e468c3934f305934ecfa5e3d9730f9d52ab769b5b6e957d8e581fcdb65c26e"}
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.638440 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sh7dc" event={"ID":"77d64c85-edec-42f6-9c3e-7bbbf04cc84e","Type":"ContainerStarted","Data":"56fb5c87f44740d8048c8cb4d5d642e3fd9ec8c193bc665a7aae3e094da9a60f"}
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.649946 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-84fbz"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.656015 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-sncl8"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.664115 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6dwg7"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.665735 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqgl7\" (UniqueName: \"kubernetes.io/projected/d24a7072-f439-4db8-8de8-216470593141-kube-api-access-wqgl7\") pod \"ingress-canary-s62bx\" (UID: \"d24a7072-f439-4db8-8de8-216470593141\") " pod="openshift-ingress-canary/ingress-canary-s62bx"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.674230 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 09:34:07 crc kubenswrapper[4933]: E1201 09:34:07.676516 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:08.176493594 +0000 UTC m=+138.818217209 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.677788 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vbvt7"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.678844 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mjkjp"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.702457 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-q5ch5"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.713169 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-t76mv"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.719993 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-s62bx"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.720605 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" event={"ID":"2020a25e-c390-4919-8f4f-3472caca4c14","Type":"ContainerStarted","Data":"a4a3c16a31f17346befa7d5331c4f40fd53684cb32fd9f2a2858b13640a01ff7"}
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.721110 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nwhhr" event={"ID":"1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6","Type":"ContainerStarted","Data":"8bb8155feae339e0295058ac5afb705a5b14548f0ccf67db91826ff98b8d8fac"}
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.730047 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tg9q9"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.731222 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn" event={"ID":"da8f0888-39cd-4813-8f5b-ba725fb15ee5","Type":"ContainerStarted","Data":"8ec58aba24a5ae8a5de56f4e433a69795807b013ff523c443b7604f9e100fcfe"}
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.731319 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn" event={"ID":"da8f0888-39cd-4813-8f5b-ba725fb15ee5","Type":"ContainerStarted","Data":"f7571434d519bef1247f50af03322b4d116b15f32e17bccf6b84003c5cf155bc"}
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.731472 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.735481 4933 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-v9qqn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.735580 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn" podUID="da8f0888-39cd-4813-8f5b-ba725fb15ee5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.735591 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qbr9b" event={"ID":"a594f887-35b0-4757-9522-e22b68536bca","Type":"ContainerStarted","Data":"a90fda7e7b3d1f3c9aff4bb18da893085fe2002e630ab5343fe42d6a66286c95"}
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.744111 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" event={"ID":"b805d945-8eed-48d3-9547-560266e5dfb1","Type":"ContainerStarted","Data":"ac9dffd47884a224173cbe8f39fd8e7655a02b0da4910f46d469b6ffe9130ecd"}
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.782193 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs"
Dec 01 09:34:07 crc kubenswrapper[4933]: E1201 09:34:07.787107 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:08.287079431 +0000 UTC m=+138.928803246 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.883695 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 09:34:07 crc kubenswrapper[4933]: E1201 09:34:07.884298 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:08.384271727 +0000 UTC m=+139.025995342 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.884654 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x82fl"]
Dec 01 09:34:07 crc kubenswrapper[4933]: I1201 09:34:07.985283 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs"
Dec 01 09:34:07 crc kubenswrapper[4933]: E1201 09:34:07.985760 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:08.48573769 +0000 UTC m=+139.127461525 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.087968 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 09:34:08 crc kubenswrapper[4933]: E1201 09:34:08.088466 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:08.588450492 +0000 UTC m=+139.230174107 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 09:34:08 crc kubenswrapper[4933]: W1201 09:34:08.157896 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf074746_e6a5_490c_86e9_4e4c969ca5fe.slice/crio-120c1c41232378c8d5deffa47bc6db483805a2399e8fc47dc4aaf7b6209e3d37 WatchSource:0}: Error finding container 120c1c41232378c8d5deffa47bc6db483805a2399e8fc47dc4aaf7b6209e3d37: Status 404 returned error can't find the container with id 120c1c41232378c8d5deffa47bc6db483805a2399e8fc47dc4aaf7b6209e3d37
Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.190472 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs"
Dec 01 09:34:08 crc kubenswrapper[4933]: E1201 09:34:08.191190 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:08.691158066 +0000 UTC m=+139.332881881 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.234804 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs"
Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.307707 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 09:34:08 crc kubenswrapper[4933]: E1201 09:34:08.308445 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:08.808412408 +0000 UTC m=+139.450136023 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.412094 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs"
Dec 01 09:34:08 crc kubenswrapper[4933]: E1201 09:34:08.412594 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:08.912579658 +0000 UTC m=+139.554303283 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.511745 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7jpf"] Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.513453 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:08 crc kubenswrapper[4933]: E1201 09:34:08.513817 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:09.013802063 +0000 UTC m=+139.655525678 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.555249 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-g9j6m"] Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.633392 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:08 crc kubenswrapper[4933]: E1201 09:34:08.633811 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:09.133794804 +0000 UTC m=+139.775518409 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.735202 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:08 crc kubenswrapper[4933]: E1201 09:34:08.735645 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:09.235629055 +0000 UTC m=+139.877352670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.769759 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sh7dc" event={"ID":"77d64c85-edec-42f6-9c3e-7bbbf04cc84e","Type":"ContainerStarted","Data":"3a13154f00aec5b26540f267d6aac83e173335dbc9973ce320f5748698de7778"} Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.779668 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7jpf" event={"ID":"c78b2b58-b81d-4a67-b879-9812138fdd29","Type":"ContainerStarted","Data":"59637e0d43f1f306d44815b04ec77a872591906611a4dad9d9d805f7287b5995"} Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.792480 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn" podStartSLOduration=120.792456952 podStartE2EDuration="2m0.792456952s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:08.790517415 +0000 UTC m=+139.432241020" watchObservedRunningTime="2025-12-01 09:34:08.792456952 +0000 UTC m=+139.434180567" Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.795065 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-g9j6m" event={"ID":"78d9587b-0bca-4439-8518-f652be926d70","Type":"ContainerStarted","Data":"f20c66aa799720958c449b4cffab60249f425b2c66909fc41f249d30dfdbb166"} Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.802549 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qbr9b" 
event={"ID":"a594f887-35b0-4757-9522-e22b68536bca","Type":"ContainerStarted","Data":"141265a72c17d015c0bcb32bcf20bf1d78fec58166fe0f6ae365a129331400a6"} Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.812093 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x82fl" event={"ID":"9074f73d-a336-4a52-960b-b18e219d12a5","Type":"ContainerStarted","Data":"52ad7492bd987600e6bde7b78abe73830ee768bb54d662f19ce248f9cc2efe76"} Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.826062 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs" podStartSLOduration=119.826041774 podStartE2EDuration="1m59.826041774s" podCreationTimestamp="2025-12-01 09:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:08.824862694 +0000 UTC m=+139.466586319" watchObservedRunningTime="2025-12-01 09:34:08.826041774 +0000 UTC m=+139.467765389" Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.837494 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rt422" event={"ID":"590153a9-670a-4443-853c-e1bd935d57c3","Type":"ContainerStarted","Data":"46a18ccce9eff732aacef16d47882eb4969f74264463a709f8e00de49d875773"} Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.837662 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:08 crc kubenswrapper[4933]: E1201 09:34:08.838044 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:09.338028611 +0000 UTC m=+139.979752226 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.845577 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wnvhn" event={"ID":"e360f8b0-0f1b-4a9b-9aed-cd0a8976482a","Type":"ContainerStarted","Data":"8f081f82f07d03c15ba725528aa0304beabff151cfebd6c3b6ef3904bbb5bab2"} Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.857393 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-hpmth" podStartSLOduration=120.857374479 podStartE2EDuration="2m0.857374479s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:08.854675862 +0000 UTC m=+139.496399507" watchObservedRunningTime="2025-12-01 09:34:08.857374479 +0000 UTC m=+139.499098114" Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.860794 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nwhhr" event={"ID":"1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6","Type":"ContainerStarted","Data":"302f39eb0031f40f96b1ea882aa0c9d54dc48003ad65d7064374b7ff5446f86d"} Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.861585 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-nwhhr" Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.863485 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vwm48" event={"ID":"fd22a22c-e9a1-4ca8-991c-100337423ece","Type":"ContainerStarted","Data":"5025cfb4070e35021a1562e42aed9152c5ea4aca46acca9dff92cb41597093ce"} Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.864960 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8pjf2" event={"ID":"5d1dbe4f-c837-4206-aa33-8ad657a3f4e5","Type":"ContainerStarted","Data":"84ea6be6ae7c39cd1bdeb932c5d77e497517f3fd35a6b1b5196594db114bd1f4"} Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.868875 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-t76mv" event={"ID":"df074746-e6a5-490c-86e9-4e4c969ca5fe","Type":"ContainerStarted","Data":"120c1c41232378c8d5deffa47bc6db483805a2399e8fc47dc4aaf7b6209e3d37"} Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.873080 4933 patch_prober.go:28] interesting pod/console-operator-58897d9998-nwhhr container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.873122 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-nwhhr" podUID="1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6" 
containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.882875 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" event={"ID":"39d14064-58a6-4a37-9a8f-2e3fdf93c46a","Type":"ContainerStarted","Data":"2f05e1e571b2ebcaddc93ae9f1ea2e2276a2a0b7dacfa2347039b5a9b7257ca3"} Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.886969 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-x74qn" podStartSLOduration=120.886947062 podStartE2EDuration="2m0.886947062s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:08.883050705 +0000 UTC m=+139.524774320" watchObservedRunningTime="2025-12-01 09:34:08.886947062 +0000 UTC m=+139.528670677" Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.898561 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" event={"ID":"b805d945-8eed-48d3-9547-560266e5dfb1","Type":"ContainerStarted","Data":"e68bdc55386bd4e1f395356a06ccd2194d56192195e739e18da415599b166c27"} Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.899283 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.903570 4933 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4wqht container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" start-of-body= Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.903655 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" podUID="b805d945-8eed-48d3-9547-560266e5dfb1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.909746 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzxkd" event={"ID":"1bbd08ce-3fa6-41a5-a2cd-2bc4b6188107","Type":"ContainerStarted","Data":"9f0eb0a19986dbafc390fe5782102bf6e23e76b6e5db3964472cc36ec44bd430"} Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.909794 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzxkd" event={"ID":"1bbd08ce-3fa6-41a5-a2cd-2bc4b6188107","Type":"ContainerStarted","Data":"9fdea8c7b149745321f2acd0cf44501df7766e9257429f07fb986f44fb486668"} Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.919774 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-nwhhr" podStartSLOduration=120.919743924 podStartE2EDuration="2m0.919743924s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:08.916700028 +0000 
UTC m=+139.558423653" watchObservedRunningTime="2025-12-01 09:34:08.919743924 +0000 UTC m=+139.561467539" Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.921426 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn" Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.938916 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:08 crc kubenswrapper[4933]: E1201 09:34:08.940453 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:09.440427816 +0000 UTC m=+140.082151431 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:08 crc kubenswrapper[4933]: I1201 09:34:08.959587 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8pjf2" podStartSLOduration=120.959563769 podStartE2EDuration="2m0.959563769s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:08.957324384 +0000 UTC m=+139.599047999" watchObservedRunningTime="2025-12-01 09:34:08.959563769 +0000 UTC m=+139.601287384" Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.035274 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" podStartSLOduration=121.035245333 podStartE2EDuration="2m1.035245333s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:09.033474449 +0000 UTC m=+139.675198064" watchObservedRunningTime="2025-12-01 09:34:09.035245333 +0000 UTC m=+139.676968948" Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.041217 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:09 crc kubenswrapper[4933]: E1201 09:34:09.041596 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-01 09:34:09.54158002 +0000 UTC m=+140.183303635 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.141964 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:09 crc kubenswrapper[4933]: E1201 09:34:09.142380 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:09.642364355 +0000 UTC m=+140.284087970 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.211847 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xdrhr"] Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.217050 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dnfn7"] Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.256030 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:09 crc kubenswrapper[4933]: E1201 09:34:09.256457 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:09.756434939 +0000 UTC m=+140.398158554 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:09 crc kubenswrapper[4933]: W1201 09:34:09.256490 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod375b62a4_7c53_4d18_8bf9_f9378321a8de.slice/crio-f47c98569e3807f80be46d3e5d4e82c8b3e72406c33128f048fd5666b3d8c720 WatchSource:0}: Error finding container f47c98569e3807f80be46d3e5d4e82c8b3e72406c33128f048fd5666b3d8c720: Status 404 returned error can't find the container with id f47c98569e3807f80be46d3e5d4e82c8b3e72406c33128f048fd5666b3d8c720 Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.281728 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n22nk"] Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.320581 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsjj4"] Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.357721 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:09 crc kubenswrapper[4933]: E1201 09:34:09.357919 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:09.857867771 +0000 UTC m=+140.499591386 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.358076 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:09 crc kubenswrapper[4933]: E1201 09:34:09.358514 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:09.858497966 +0000 UTC m=+140.500221581 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.394488 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4mnm"] Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.397116 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lmgsj"] Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.427432 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-v4fq8"] Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.444046 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htq9g"] Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.459600 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:09 crc kubenswrapper[4933]: E1201 09:34:09.460015 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:09.959981809 +0000 UTC m=+140.601705424 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.463336 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zj2bn"] Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.483644 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n42jc"] Dec 01 09:34:09 crc kubenswrapper[4933]: W1201 09:34:09.507562 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod797e5656_fa46_48f0_a336_40560b3da3a5.slice/crio-4d098fa63b14342ba73eb62488c723357085c56fa6f4ac8fa90bcceeaf6d86bf WatchSource:0}: Error finding container 4d098fa63b14342ba73eb62488c723357085c56fa6f4ac8fa90bcceeaf6d86bf: Status 404 returned error can't find the container with id 4d098fa63b14342ba73eb62488c723357085c56fa6f4ac8fa90bcceeaf6d86bf Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.527517 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-84fbz"] Dec 01 09:34:09 crc kubenswrapper[4933]: W1201 09:34:09.532830 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c10447e_0359_4448_9e95_f4952176901c.slice/crio-b3c34ba816d584a1a6ec378b48f1ca702e7dc4dc8c720e4eb324dd04f56f3867 WatchSource:0}: Error finding container b3c34ba816d584a1a6ec378b48f1ca702e7dc4dc8c720e4eb324dd04f56f3867: Status 404 returned error can't find the container with id b3c34ba816d584a1a6ec378b48f1ca702e7dc4dc8c720e4eb324dd04f56f3867 Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.560913 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:09 crc kubenswrapper[4933]: E1201 09:34:09.561529 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:10.061506022 +0000 UTC m=+140.703229827 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.662411 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:09 crc kubenswrapper[4933]: E1201 09:34:09.663243 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:10.1632264 +0000 UTC m=+140.804950015 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:09 crc kubenswrapper[4933]: W1201 09:34:09.689794 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc121048e_9df5_412a_9d86_e7cf8a59d0e1.slice/crio-8af91ecc530e5161a0766170261d198e5f820a1a1b2d1254ff5bd0ddd5629556 WatchSource:0}: Error finding container 8af91ecc530e5161a0766170261d198e5f820a1a1b2d1254ff5bd0ddd5629556: Status 404 returned error can't find the container with id 8af91ecc530e5161a0766170261d198e5f820a1a1b2d1254ff5bd0ddd5629556 Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.771755 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:09 crc kubenswrapper[4933]: E1201 09:34:09.772064 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:10.272052335 +0000 UTC m=+140.913775950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.847378 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mjkjp"] Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.852818 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409690-jvl8l"] Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.861468 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-q5ch5"] Dec 01 09:34:09 crc kubenswrapper[4933]: E1201 09:34:09.873455 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:10.373397214 +0000 UTC m=+141.015120839 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.872974 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.877361 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:09 crc kubenswrapper[4933]: E1201 09:34:09.878282 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:10.378266045 +0000 UTC m=+141.019989660 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.883055 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sfb5s"] Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.905279 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-s62bx"] Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.923223 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2clwd"] Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.944223 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zj2bn" event={"ID":"51766f22-0ddf-4f2e-bbbd-059431d6ef4e","Type":"ContainerStarted","Data":"77a81dc9ae151d0c3aa2e668a7bfc68b16dca58e6b2bd656822b91699ff537d8"} Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.945757 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n42jc" event={"ID":"1c10447e-0359-4448-9e95-f4952176901c","Type":"ContainerStarted","Data":"b3c34ba816d584a1a6ec378b48f1ca702e7dc4dc8c720e4eb324dd04f56f3867"} Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.950900 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-sncl8"] Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.958413 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vbvt7"] Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.959812 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsjj4" event={"ID":"91787165-ff9b-4d6c-8d81-a6efc4bdb19a","Type":"ContainerStarted","Data":"96c5a6ca24dd89c270bdecf5c1a93f0ba648ff96d5a95b21d056ce7ace4ae56d"} Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.969688 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-84fbz" event={"ID":"025dba90-61ee-4610-b166-737b0ef8b7b5","Type":"ContainerStarted","Data":"6c1aedb8773d2f08a8ee851b3bbbabeabe887302aebb0ad18ee9b0165915fe3f"} Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.971929 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6lzfq"] Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.977506 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tg9q9"] Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.978253 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:09 crc kubenswrapper[4933]: E1201 09:34:09.978568 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:10.478552657 +0000 UTC m=+141.120276272 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.980437 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6dwg7"] Dec 01 09:34:09 crc kubenswrapper[4933]: I1201 09:34:09.996531 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4mnm" event={"ID":"325978fb-e819-4d17-af79-821ee41da615","Type":"ContainerStarted","Data":"07eb57065b25d16afdd356e9cc15152a434cd513798053fcdb398c7f4a4f8dcd"} Dec 01 09:34:10 crc kubenswrapper[4933]: I1201 09:34:09.999253 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n22nk" event={"ID":"9b3e0393-fe92-4ca0-b9ce-85bceddbfad4","Type":"ContainerStarted","Data":"8d82b61b81ae37b85667636dc3ff9c5b3c9cc14c866c991f75317fc0f74e1bdd"} Dec 01 09:34:10 crc kubenswrapper[4933]: I1201 09:34:10.004340 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lmgsj" event={"ID":"797e5656-fa46-48f0-a336-40560b3da3a5","Type":"ContainerStarted","Data":"4d098fa63b14342ba73eb62488c723357085c56fa6f4ac8fa90bcceeaf6d86bf"} Dec 01 09:34:10 crc kubenswrapper[4933]: I1201 09:34:10.005981 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xdrhr" event={"ID":"29cdc67d-6d2a-44b2-bd31-3634aff7f52e","Type":"ContainerStarted","Data":"a096fd1153891d8a0ecf78e8e60e6c23d44b59b4740925ed934c6e05131eb50b"} Dec 01 09:34:10 crc kubenswrapper[4933]: W1201 09:34:10.008092 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda23201bd_20d0_4ba3_92c5_954403c260f9.slice/crio-44fa4d5a7d3f9ba0e7f6cf80bf9615741a19a1055fef78a27038e9b0efcb9ad9 WatchSource:0}: Error finding container 44fa4d5a7d3f9ba0e7f6cf80bf9615741a19a1055fef78a27038e9b0efcb9ad9: Status 404 returned error can't find the container with id 44fa4d5a7d3f9ba0e7f6cf80bf9615741a19a1055fef78a27038e9b0efcb9ad9 Dec 01 09:34:10 crc kubenswrapper[4933]: I1201 09:34:10.008207 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-v4fq8" event={"ID":"c121048e-9df5-412a-9d86-e7cf8a59d0e1","Type":"ContainerStarted","Data":"8af91ecc530e5161a0766170261d198e5f820a1a1b2d1254ff5bd0ddd5629556"} Dec 01 09:34:10 crc kubenswrapper[4933]: I1201 09:34:10.017032 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htq9g" 
event={"ID":"e69273f1-a871-402d-bb09-2150ee1134b1","Type":"ContainerStarted","Data":"fb0bfc753f0c2d9839e45dbffb49dd7c5281e4051d6ca817460a86622b278054"} Dec 01 09:34:10 crc kubenswrapper[4933]: W1201 09:34:10.027619 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4193888f_65e7_4818_b3e4_0dc9d1a32ed4.slice/crio-57723c144f836393e641cd654b1bb74c71747a015216cdd8db35b03ea5b4e0f1 WatchSource:0}: Error finding container 57723c144f836393e641cd654b1bb74c71747a015216cdd8db35b03ea5b4e0f1: Status 404 returned error can't find the container with id 57723c144f836393e641cd654b1bb74c71747a015216cdd8db35b03ea5b4e0f1 Dec 01 09:34:10 crc kubenswrapper[4933]: I1201 09:34:10.029147 4933 generic.go:334] "Generic (PLEG): container finished" podID="2020a25e-c390-4919-8f4f-3472caca4c14" containerID="4d01a74f592ef20cb417c9cb86732ef28e9a7c411d18178a7baf22955c7ba324" exitCode=0 Dec 01 09:34:10 crc kubenswrapper[4933]: I1201 09:34:10.029238 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" event={"ID":"2020a25e-c390-4919-8f4f-3472caca4c14","Type":"ContainerDied","Data":"4d01a74f592ef20cb417c9cb86732ef28e9a7c411d18178a7baf22955c7ba324"} Dec 01 09:34:10 crc kubenswrapper[4933]: I1201 09:34:10.032400 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dnfn7" event={"ID":"375b62a4-7c53-4d18-8bf9-f9378321a8de","Type":"ContainerStarted","Data":"f47c98569e3807f80be46d3e5d4e82c8b3e72406c33128f048fd5666b3d8c720"} Dec 01 09:34:10 crc kubenswrapper[4933]: I1201 09:34:10.036688 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wnvhn" event={"ID":"e360f8b0-0f1b-4a9b-9aed-cd0a8976482a","Type":"ContainerStarted","Data":"be0e4c40c2a0554565ca6075776e4fab53baf49ac8546e122c62aa6caed549e5"} Dec 01 09:34:10 crc kubenswrapper[4933]: I1201 09:34:10.038168 4933 patch_prober.go:28] interesting pod/console-operator-58897d9998-nwhhr container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 01 09:34:10 crc kubenswrapper[4933]: I1201 09:34:10.040500 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-nwhhr" podUID="1d81d0cc-3c5d-4b01-8a80-51dafb39c1f6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 01 09:34:10 crc kubenswrapper[4933]: W1201 09:34:10.050902 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d7ca308_276f_4775_8828_abee226710b6.slice/crio-d24686d5d3d739c1aaccaec5f8141458d1313fc65dc8ffd524e0190b4e516d7c WatchSource:0}: Error finding container d24686d5d3d739c1aaccaec5f8141458d1313fc65dc8ffd524e0190b4e516d7c: Status 404 returned error can't find the container with id d24686d5d3d739c1aaccaec5f8141458d1313fc65dc8ffd524e0190b4e516d7c Dec 01 09:34:10 crc kubenswrapper[4933]: I1201 09:34:10.080428 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" 
(UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:10 crc kubenswrapper[4933]: E1201 09:34:10.084810 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:10.584791057 +0000 UTC m=+141.226514872 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:10 crc kubenswrapper[4933]: I1201 09:34:10.181192 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:10 crc kubenswrapper[4933]: E1201 09:34:10.181451 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:10.681417789 +0000 UTC m=+141.323141424 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:10 crc kubenswrapper[4933]: I1201 09:34:10.181914 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:10 crc kubenswrapper[4933]: E1201 09:34:10.182417 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:10.682406544 +0000 UTC m=+141.324130159 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:10 crc kubenswrapper[4933]: I1201 09:34:10.283498 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:10 crc kubenswrapper[4933]: E1201 09:34:10.284112 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:10.784084741 +0000 UTC m=+141.425808356 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:10 crc kubenswrapper[4933]: I1201 09:34:10.388711 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:10 crc kubenswrapper[4933]: E1201 09:34:10.389131 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:10.889113511 +0000 UTC m=+141.530837116 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:10 crc kubenswrapper[4933]: I1201 09:34:10.489617 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:10 crc kubenswrapper[4933]: E1201 09:34:10.489801 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:10.989774563 +0000 UTC m=+141.631498178 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:10 crc kubenswrapper[4933]: I1201 09:34:10.490070 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:10 crc kubenswrapper[4933]: E1201 09:34:10.490408 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:10.990399749 +0000 UTC m=+141.632123354 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:10 crc kubenswrapper[4933]: I1201 09:34:10.591686 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:10 crc kubenswrapper[4933]: E1201 09:34:10.592235 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:11.092216009 +0000 UTC m=+141.733939624 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:10 crc kubenswrapper[4933]: I1201 09:34:10.694068 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:10 crc kubenswrapper[4933]: E1201 09:34:10.697212 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:11.197196359 +0000 UTC m=+141.838919974 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:10 crc kubenswrapper[4933]: I1201 09:34:10.798243 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:10 crc kubenswrapper[4933]: E1201 09:34:10.798901 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:11.298858876 +0000 UTC m=+141.940582491 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:10 crc kubenswrapper[4933]: I1201 09:34:10.899991 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:10 crc kubenswrapper[4933]: E1201 09:34:10.900544 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:11.400524213 +0000 UTC m=+142.042247828 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.001747 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:11 crc kubenswrapper[4933]: E1201 09:34:11.001951 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:11.501909013 +0000 UTC m=+142.143632618 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.002322 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:11 crc kubenswrapper[4933]: E1201 09:34:11.002760 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:11.502753234 +0000 UTC m=+142.144476849 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.042096 4933 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4wqht container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.17:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.042605 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" podUID="b805d945-8eed-48d3-9547-560266e5dfb1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.17:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.102748 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" event={"ID":"39d14064-58a6-4a37-9a8f-2e3fdf93c46a","Type":"ContainerStarted","Data":"ef5a97cfc6c0939edbef578aed4b5a5f705648cf3e9aa25374d40b3c02dd3100"} Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.105048 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:11 crc kubenswrapper[4933]: E1201 09:34:11.105216 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:11.60519056 +0000 UTC m=+142.246914175 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.105364 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:11 crc kubenswrapper[4933]: E1201 09:34:11.105925 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:11.605909358 +0000 UTC m=+142.247632973 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.122277 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xdrhr" event={"ID":"29cdc67d-6d2a-44b2-bd31-3634aff7f52e","Type":"ContainerStarted","Data":"576969d61f008af32620c0a50c5a8ac5a57e490e6c121c4610608f05c84bb674"} Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.131924 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-g9j6m" event={"ID":"78d9587b-0bca-4439-8518-f652be926d70","Type":"ContainerStarted","Data":"041bcc5ab86585b46a08a1648279e4e8fe9957dd3dc3c4c5753d1fb2aa85a654"} Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.159111 4933 generic.go:334] "Generic (PLEG): container finished" podID="77d64c85-edec-42f6-9c3e-7bbbf04cc84e" containerID="3a13154f00aec5b26540f267d6aac83e173335dbc9973ce320f5748698de7778" exitCode=0 Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.159244 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sh7dc" event={"ID":"77d64c85-edec-42f6-9c3e-7bbbf04cc84e","Type":"ContainerDied","Data":"3a13154f00aec5b26540f267d6aac83e173335dbc9973ce320f5748698de7778"} Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.168162 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n42jc" event={"ID":"1c10447e-0359-4448-9e95-f4952176901c","Type":"ContainerStarted","Data":"ca73d17de631f0c90982f024834ba91a23c5bbf744521f5d50b6d43ba32dd4b4"} Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.173804 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dnfn7" 
event={"ID":"375b62a4-7c53-4d18-8bf9-f9378321a8de","Type":"ContainerStarted","Data":"aa70c297425bb6b6482ad102cd86d4a8b3611fe6b5cdf19ed26b663c138a3823"} Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.204264 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lzfq" event={"ID":"4193888f-65e7-4818-b3e4-0dc9d1a32ed4","Type":"ContainerStarted","Data":"57723c144f836393e641cd654b1bb74c71747a015216cdd8db35b03ea5b4e0f1"} Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.206190 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:11 crc kubenswrapper[4933]: E1201 09:34:11.206470 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:11.706448717 +0000 UTC m=+142.348172332 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.206602 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:11 crc kubenswrapper[4933]: E1201 09:34:11.208428 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:11.708407936 +0000 UTC m=+142.350131551 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.230740 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-2clwd" event={"ID":"a23201bd-20d0-4ba3-92c5-954403c260f9","Type":"ContainerStarted","Data":"44fa4d5a7d3f9ba0e7f6cf80bf9615741a19a1055fef78a27038e9b0efcb9ad9"} Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.245195 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-sncl8" event={"ID":"9d7ca308-276f-4775-8828-abee226710b6","Type":"ContainerStarted","Data":"d24686d5d3d739c1aaccaec5f8141458d1313fc65dc8ffd524e0190b4e516d7c"} Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.291902 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mjkjp" event={"ID":"a398625d-22e4-4bd7-a1e3-5231df797e36","Type":"ContainerStarted","Data":"2e489f1f51e9a90d9b4b6bee6c1dbfb99334cc18dd694e6f6ea055550e6cf2d5"} Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.295824 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-jvl8l" event={"ID":"a3422f33-b5ab-4658-86a0-c908efca7db9","Type":"ContainerStarted","Data":"8b270b6914b313d05c1ce07f1d1af2245806be54cb7bde2c5d40f9715e90daac"} Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.302622 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6dwg7" event={"ID":"a7bd5924-9a3f-43cf-99b1-2d5d20975f81","Type":"ContainerStarted","Data":"58a2f3642b097d8151c33e8c9d854ea45c39d50a8e1c6cf9d648c0f1c6be6246"} Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.309883 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:11 crc kubenswrapper[4933]: E1201 09:34:11.310166 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:11.810134834 +0000 UTC m=+142.451858449 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.310850 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:11 crc kubenswrapper[4933]: E1201 09:34:11.311324 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:11.811289213 +0000 UTC m=+142.453012828 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.325737 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-s62bx" event={"ID":"d24a7072-f439-4db8-8de8-216470593141","Type":"ContainerStarted","Data":"f983f74e655189f4c53fe05ac107761bdb2f115691e077a9d16747abd03d6777"} Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.328866 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-jvl8l" podStartSLOduration=122.328840538 podStartE2EDuration="2m2.328840538s" podCreationTimestamp="2025-12-01 09:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:11.320748057 +0000 UTC m=+141.962471682" watchObservedRunningTime="2025-12-01 09:34:11.328840538 +0000 UTC m=+141.970564153" Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.335802 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n22nk" Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.340291 4933 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-n22nk container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body= Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.340428 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n22nk" podUID="9b3e0393-fe92-4ca0-b9ce-85bceddbfad4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 
10.217.0.28:5443: connect: connection refused" Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.349215 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zj2bn" Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.349466 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6dwg7" podStartSLOduration=123.349447167 podStartE2EDuration="2m3.349447167s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:11.347164871 +0000 UTC m=+141.988888476" watchObservedRunningTime="2025-12-01 09:34:11.349447167 +0000 UTC m=+141.991170782" Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.358744 4933 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zj2bn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.358792 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zj2bn" podUID="51766f22-0ddf-4f2e-bbbd-059431d6ef4e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.393118 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vbvt7" event={"ID":"3e4cfcb2-ffde-40ce-8934-6f63d1816e9b","Type":"ContainerStarted","Data":"beadd4208c3ffbdfa17d8def87f96956cadd0f9e5a6e83cdc3beac524953c45d"} Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.407017 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n22nk" podStartSLOduration=122.406994392 podStartE2EDuration="2m2.406994392s" podCreationTimestamp="2025-12-01 09:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:11.39075182 +0000 UTC m=+142.032475435" watchObservedRunningTime="2025-12-01 09:34:11.406994392 +0000 UTC m=+142.048718007" Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.412693 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:11 crc kubenswrapper[4933]: E1201 09:34:11.415223 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:11.915194485 +0000 UTC m=+142.556918100 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.416085 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4mnm" Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.434118 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-v4fq8" event={"ID":"c121048e-9df5-412a-9d86-e7cf8a59d0e1","Type":"ContainerStarted","Data":"9db326659e57cdc252eb36a73fab7912c3aecd1214e5b78f3fae5552288efac2"} Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.434372 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-v4fq8" Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.434692 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-s62bx" podStartSLOduration=7.434678357 podStartE2EDuration="7.434678357s" podCreationTimestamp="2025-12-01 09:34:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:11.429252113 +0000 UTC m=+142.070975738" watchObservedRunningTime="2025-12-01 09:34:11.434678357 +0000 UTC m=+142.076401972" Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.440124 4933 patch_prober.go:28] interesting pod/downloads-7954f5f757-v4fq8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.440199 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v4fq8" podUID="c121048e-9df5-412a-9d86-e7cf8a59d0e1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.449788 4933 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-p4mnm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.449885 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4mnm" podUID="325978fb-e819-4d17-af79-821ee41da615" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.514034 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: 
\"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:11 crc kubenswrapper[4933]: E1201 09:34:11.515327 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:12.015295483 +0000 UTC m=+142.657019088 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.533789 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-t76mv" event={"ID":"df074746-e6a5-490c-86e9-4e4c969ca5fe","Type":"ContainerStarted","Data":"f518809dd98247558723509964ceebb8d7c819f79bf4ca63904924fb0d801206"} Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.535207 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zj2bn" podStartSLOduration=123.535196846 podStartE2EDuration="2m3.535196846s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:11.534428856 +0000 UTC m=+142.176152471" watchObservedRunningTime="2025-12-01 09:34:11.535196846 +0000 UTC m=+142.176920461" Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.535340 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4mnm" podStartSLOduration=123.535336239 podStartE2EDuration="2m3.535336239s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:11.499715228 +0000 UTC m=+142.141438863" watchObservedRunningTime="2025-12-01 09:34:11.535336239 +0000 UTC m=+142.177059854" Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.561811 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sfb5s" event={"ID":"8cf4d910-9922-451e-a6b6-1fa833027e5d","Type":"ContainerStarted","Data":"173f6ee0dba2833bc41c5fc1d9865c26b64b1a96478290179cacf27206487807"} Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.570149 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tg9q9" event={"ID":"62125939-842d-45b3-824f-a45038a3226a","Type":"ContainerStarted","Data":"4e9aac7cba634e044691672d13e4a53d096c14c8cf96170d1b16ee4249b3ff4e"} Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.572181 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x82fl" event={"ID":"9074f73d-a336-4a52-960b-b18e219d12a5","Type":"ContainerStarted","Data":"3aebc40b0ae03d475628cc6031cc0865475fc330252e3b1a3b11ad8eb68d1c6d"} Dec 01 09:34:11 crc 
kubenswrapper[4933]: I1201 09:34:11.573140 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x82fl" Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.588330 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q5ch5" event={"ID":"64c1704c-f0a1-4401-bc2d-46febb3ba534","Type":"ContainerStarted","Data":"a50e23a047f3da18fc26f788badbe68617f78cfc384fd3171b80856ddb0e10e3"} Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.596713 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rt422" event={"ID":"590153a9-670a-4443-853c-e1bd935d57c3","Type":"ContainerStarted","Data":"ab505dd7a521324987eb4bf4faa730821db05be3d9694db0c714c8ed9a019106"} Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.598241 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsjj4" podStartSLOduration=123.598211926 podStartE2EDuration="2m3.598211926s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:11.597656003 +0000 UTC m=+142.239379618" watchObservedRunningTime="2025-12-01 09:34:11.598211926 +0000 UTC m=+142.239935551" Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.615953 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:11 crc kubenswrapper[4933]: E1201 09:34:11.617937 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:12.117907474 +0000 UTC m=+142.759631089 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.624749 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vwm48" event={"ID":"fd22a22c-e9a1-4ca8-991c-100337423ece","Type":"ContainerStarted","Data":"0b2fae80243a8b1a3bac0304a9c06d07befbbfa9e1f2955f44411cbc0b6ca931"} Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.625530 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htq9g" podStartSLOduration=123.625512751 podStartE2EDuration="2m3.625512751s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:11.62342365 +0000 UTC m=+142.265147265" watchObservedRunningTime="2025-12-01 09:34:11.625512751 +0000 UTC m=+142.267236366" Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.653730 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-v4fq8" podStartSLOduration=123.65371017 podStartE2EDuration="2m3.65371017s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:11.652432999 +0000 UTC m=+142.294156614" watchObservedRunningTime="2025-12-01 09:34:11.65371017 +0000 UTC m=+142.295433785" Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.688653 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-t76mv" podStartSLOduration=7.688633385 podStartE2EDuration="7.688633385s" podCreationTimestamp="2025-12-01 09:34:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:11.683214301 +0000 UTC m=+142.324937906" watchObservedRunningTime="2025-12-01 09:34:11.688633385 +0000 UTC m=+142.330357000" Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.717765 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:11 crc kubenswrapper[4933]: E1201 09:34:11.721000 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:12.220987946 +0000 UTC m=+142.862711561 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.730626 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-vwm48" podStartSLOduration=123.730601233 podStartE2EDuration="2m3.730601233s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:11.711333597 +0000 UTC m=+142.353057222" watchObservedRunningTime="2025-12-01 09:34:11.730601233 +0000 UTC m=+142.372324838" Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.743668 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.743728 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.757188 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-wnvhn" podStartSLOduration=123.757160521 podStartE2EDuration="2m3.757160521s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:11.745437621 +0000 UTC m=+142.387161256" watchObservedRunningTime="2025-12-01 09:34:11.757160521 +0000 UTC m=+142.398884136" Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.820161 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:11 crc kubenswrapper[4933]: E1201 09:34:11.826931 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:12.326904058 +0000 UTC m=+142.968627683 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.830097 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:11 crc kubenswrapper[4933]: E1201 09:34:11.830635 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:12.33062123 +0000 UTC m=+142.972344835 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.842772 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rt422" podStartSLOduration=123.842751611 podStartE2EDuration="2m3.842751611s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:11.790175688 +0000 UTC m=+142.431899303" watchObservedRunningTime="2025-12-01 09:34:11.842751611 +0000 UTC m=+142.484475216" Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.912162 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x82fl" podStartSLOduration=123.912146958 podStartE2EDuration="2m3.912146958s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:11.910134529 +0000 UTC m=+142.551858144" watchObservedRunningTime="2025-12-01 09:34:11.912146958 +0000 UTC m=+142.553870583" Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.931206 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:11 crc kubenswrapper[4933]: E1201 09:34:11.931708 4933 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:12.431684272 +0000 UTC m=+143.073407887 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:11 crc kubenswrapper[4933]: I1201 09:34:11.989220 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qbr9b" podStartSLOduration=123.989191295 podStartE2EDuration="2m3.989191295s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:11.948051967 +0000 UTC m=+142.589775582" watchObservedRunningTime="2025-12-01 09:34:11.989191295 +0000 UTC m=+142.630914900" Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.034028 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:12 crc kubenswrapper[4933]: E1201 09:34:12.035561 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:12.535540813 +0000 UTC m=+143.177264428 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.126169 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-wnvhn" Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.134969 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:12 crc kubenswrapper[4933]: E1201 09:34:12.135345 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-01 09:34:12.635326974 +0000 UTC m=+143.277050589 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.139552 4933 patch_prober.go:28] interesting pod/router-default-5444994796-wnvhn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 09:34:12 crc kubenswrapper[4933]: [-]has-synced failed: reason withheld Dec 01 09:34:12 crc kubenswrapper[4933]: [+]process-running ok Dec 01 09:34:12 crc kubenswrapper[4933]: healthz check failed Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.139616 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wnvhn" podUID="e360f8b0-0f1b-4a9b-9aed-cd0a8976482a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.240372 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:12 crc kubenswrapper[4933]: E1201 09:34:12.240958 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:12.740937378 +0000 UTC m=+143.382660993 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.342359 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:12 crc kubenswrapper[4933]: E1201 09:34:12.342565 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:12.842530584 +0000 UTC m=+143.484254189 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.343197 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:12 crc kubenswrapper[4933]: E1201 09:34:12.343548 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:12.843540578 +0000 UTC m=+143.485264193 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.445952 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:12 crc kubenswrapper[4933]: E1201 09:34:12.446470 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:12.946450266 +0000 UTC m=+143.588173881 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.549417 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:12 crc kubenswrapper[4933]: E1201 09:34:12.549865 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:13.049845006 +0000 UTC m=+143.691568621 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.650669 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:12 crc kubenswrapper[4933]: E1201 09:34:12.650943 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:13.150911228 +0000 UTC m=+143.792634863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.651108 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:12 crc kubenswrapper[4933]: E1201 09:34:12.651528 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:13.151517683 +0000 UTC m=+143.793241298 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.666149 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vbvt7" event={"ID":"3e4cfcb2-ffde-40ce-8934-6f63d1816e9b","Type":"ContainerStarted","Data":"ee6bdf742cd40c6db5c07587e2ee022d4899fe5cac5c07beb92aeaf6d258c8ed"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.674210 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsjj4" event={"ID":"91787165-ff9b-4d6c-8d81-a6efc4bdb19a","Type":"ContainerStarted","Data":"af3a314e381e81550ad56dae7c5cb7f6d747866c718808e1be2e8986f66f69a2"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.680054 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dnfn7" event={"ID":"375b62a4-7c53-4d18-8bf9-f9378321a8de","Type":"ContainerStarted","Data":"83e3fbfc9e79e0576a15f7ef89928bf0e5186f8477274aed4203baee38b1d369"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.688768 4933 generic.go:334] "Generic (PLEG): container finished" podID="39d14064-58a6-4a37-9a8f-2e3fdf93c46a" containerID="ef5a97cfc6c0939edbef578aed4b5a5f705648cf3e9aa25374d40b3c02dd3100" exitCode=0 Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.688889 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" event={"ID":"39d14064-58a6-4a37-9a8f-2e3fdf93c46a","Type":"ContainerDied","Data":"ef5a97cfc6c0939edbef578aed4b5a5f705648cf3e9aa25374d40b3c02dd3100"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.688934 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" 
event={"ID":"39d14064-58a6-4a37-9a8f-2e3fdf93c46a","Type":"ContainerStarted","Data":"b12a8bf8b666d92d991ae819f14341aaec22a2924172a9fcbfbdcbd1ca870f67"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.699840 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q5ch5" event={"ID":"64c1704c-f0a1-4401-bc2d-46febb3ba534","Type":"ContainerStarted","Data":"72ad28fd960be499a76f55eca452122412bad51a011688176f5f6fbe768b7c2b"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.706812 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-jvl8l" event={"ID":"a3422f33-b5ab-4658-86a0-c908efca7db9","Type":"ContainerStarted","Data":"a48d1ad0a3c4f2dc67a0ef25a006868e41fb607592f16bc6a8234a203356793e"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.717476 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sfb5s" event={"ID":"8cf4d910-9922-451e-a6b6-1fa833027e5d","Type":"ContainerStarted","Data":"e84cc2432e0bb313b8ef6f485ef53d88e36424149cd6969a4eeb71c6072ef800"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.725402 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-s62bx" event={"ID":"d24a7072-f439-4db8-8de8-216470593141","Type":"ContainerStarted","Data":"c0421f9a94534fbf335874da98c93ff7e6b18f5fe0171de3b07a332d34324318"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.742529 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x82fl" event={"ID":"9074f73d-a336-4a52-960b-b18e219d12a5","Type":"ContainerStarted","Data":"e1d6dfa6b415e1b957404e37ff885fb99911223931f6de32f41c507aeed04e77"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.753083 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:12 crc kubenswrapper[4933]: E1201 09:34:12.755175 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:13.255147028 +0000 UTC m=+143.896870653 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.756547 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xdrhr" event={"ID":"29cdc67d-6d2a-44b2-bd31-3634aff7f52e","Type":"ContainerStarted","Data":"885f1353515ffdb52df2647ef9d4b46e836e209c6bd21964e0ecdcb5290ca4c5"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.772763 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zj2bn" event={"ID":"51766f22-0ddf-4f2e-bbbd-059431d6ef4e","Type":"ContainerStarted","Data":"4da124f76afb921651a0cc3d7b12bbda7c8b8019705c877125b1e8c462af52a8"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.774551 4933 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zj2bn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.774745 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zj2bn" podUID="51766f22-0ddf-4f2e-bbbd-059431d6ef4e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.787797 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-g9j6m" event={"ID":"78d9587b-0bca-4439-8518-f652be926d70","Type":"ContainerStarted","Data":"38cda5bad1bcc75260e106dabe9e673c8dba31d042e972a46a53ae4e5f83e0ee"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.794158 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mjkjp" event={"ID":"a398625d-22e4-4bd7-a1e3-5231df797e36","Type":"ContainerStarted","Data":"ff475bf8489778103b906026fb79fa36d916cc13a4bff33fe889de49dac83f37"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.794220 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mjkjp" event={"ID":"a398625d-22e4-4bd7-a1e3-5231df797e36","Type":"ContainerStarted","Data":"05b56176bdfc3af627d66c049a1c6aeee729a7303d79b3c582ad67efa429c537"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.805662 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6dwg7" event={"ID":"a7bd5924-9a3f-43cf-99b1-2d5d20975f81","Type":"ContainerStarted","Data":"5e07d676ae299b99bf6edabcc5b723e1dd2706fd32c1801d3cd5cf96195aa3b5"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.829289 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-2clwd" 
event={"ID":"a23201bd-20d0-4ba3-92c5-954403c260f9","Type":"ContainerStarted","Data":"34947bd61137cdfc1467413e680ccfb905c1697b4434a7f2a93ebe3124450c34"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.838512 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lmgsj" event={"ID":"797e5656-fa46-48f0-a336-40560b3da3a5","Type":"ContainerStarted","Data":"4af73d275b84cd32a1b6e5c67d6707ba88713b0457a6bfb9d6c68f777e82a30b"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.838569 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lmgsj" event={"ID":"797e5656-fa46-48f0-a336-40560b3da3a5","Type":"ContainerStarted","Data":"e89dbc17edf56676ba3548258a155ad69f9df59bc9da0b3d01b104cad8a71c4d"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.839333 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-lmgsj" Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.850199 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htq9g" event={"ID":"e69273f1-a871-402d-bb09-2150ee1134b1","Type":"ContainerStarted","Data":"caae866cde10b5e6f6e5bb089afee7106bfaa77417a9ec0fe2c3b675fa527920"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.856879 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-84fbz" event={"ID":"025dba90-61ee-4610-b166-737b0ef8b7b5","Type":"ContainerStarted","Data":"fc5a616cadb88760783ebd95fd7dc3fe1c2685ab235f3de1bbf23f243af394ee"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.858604 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:12 crc kubenswrapper[4933]: E1201 09:34:12.860947 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:13.360922078 +0000 UTC m=+144.002645693 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.876484 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7jpf" event={"ID":"c78b2b58-b81d-4a67-b879-9812138fdd29","Type":"ContainerStarted","Data":"3cef8a3fafad4a30a0b190fb4ffe67be1f30653d40db36c170609c2ff111c250"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.877719 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7jpf" Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.891608 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n22nk" event={"ID":"9b3e0393-fe92-4ca0-b9ce-85bceddbfad4","Type":"ContainerStarted","Data":"205367b7444979001e1a32881f6f19ebe4d1b662a3c8c35c2c8ef571cd49f353"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.899823 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lzfq" event={"ID":"4193888f-65e7-4818-b3e4-0dc9d1a32ed4","Type":"ContainerStarted","Data":"466218c1c0b5cd93101d2ae2be74103bb2cd3f348516742b5debbc3f3d529b0f"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.899862 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lzfq" event={"ID":"4193888f-65e7-4818-b3e4-0dc9d1a32ed4","Type":"ContainerStarted","Data":"db09d1a07294702da371bc5051d3b05070f8de7be4a0c7fc014b40a816fc05ed"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.910197 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tg9q9" event={"ID":"62125939-842d-45b3-824f-a45038a3226a","Type":"ContainerStarted","Data":"b5630bee3452d68786e675a30f9b1093172c76c7fabe3722ce720374297361d2"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.910259 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tg9q9" event={"ID":"62125939-842d-45b3-824f-a45038a3226a","Type":"ContainerStarted","Data":"9844ca7cf589639201fe5d0f4b7f4dc8cfb6db62216044ef0ba651cb9b66eb0a"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.922422 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7jpf" Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.933526 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4mnm" event={"ID":"325978fb-e819-4d17-af79-821ee41da615","Type":"ContainerStarted","Data":"aa6497c32c91c65af7504ded51f447dd9ac9ee827fc604478ff78ceca085147a"} Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.942239 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4mnm" Dec 01 09:34:12 crc 
kubenswrapper[4933]: I1201 09:34:12.959680 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzxkd" podStartSLOduration=124.959648431 podStartE2EDuration="2m4.959648431s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:11.991914713 +0000 UTC m=+142.633638328" watchObservedRunningTime="2025-12-01 09:34:12.959648431 +0000 UTC m=+143.601372046" Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.961398 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dnfn7" podStartSLOduration=124.961390875 podStartE2EDuration="2m4.961390875s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:12.957730994 +0000 UTC m=+143.599454619" watchObservedRunningTime="2025-12-01 09:34:12.961390875 +0000 UTC m=+143.603114490" Dec 01 09:34:12 crc kubenswrapper[4933]: I1201 09:34:12.961821 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:12 crc kubenswrapper[4933]: E1201 09:34:12.966369 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:13.466342297 +0000 UTC m=+144.108065912 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.002228 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-sncl8" event={"ID":"9d7ca308-276f-4775-8828-abee226710b6","Type":"ContainerStarted","Data":"d830ba83aa2721e4fdea6aedf897ee6541225deb1dfef48d7077e26fb20d376c"} Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.015124 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sh7dc" event={"ID":"77d64c85-edec-42f6-9c3e-7bbbf04cc84e","Type":"ContainerStarted","Data":"f8a0e01cdd9dc96ac87fc51bd7350b6fd608375b6c619f1b59a0fcd095101ab7"} Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.015204 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sh7dc" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.017865 4933 patch_prober.go:28] interesting pod/downloads-7954f5f757-v4fq8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.017943 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v4fq8" podUID="c121048e-9df5-412a-9d86-e7cf8a59d0e1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.068068 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:13 crc kubenswrapper[4933]: E1201 09:34:13.071384 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:13.571368528 +0000 UTC m=+144.213092143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.130990 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-g9j6m" podStartSLOduration=125.130965453 podStartE2EDuration="2m5.130965453s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:13.074209948 +0000 UTC m=+143.715933563" watchObservedRunningTime="2025-12-01 09:34:13.130965453 +0000 UTC m=+143.772689068" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.140843 4933 patch_prober.go:28] interesting pod/router-default-5444994796-wnvhn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 09:34:13 crc kubenswrapper[4933]: [-]has-synced failed: reason withheld Dec 01 09:34:13 crc kubenswrapper[4933]: [+]process-running ok Dec 01 09:34:13 crc kubenswrapper[4933]: healthz check failed Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.140914 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wnvhn" podUID="e360f8b0-0f1b-4a9b-9aed-cd0a8976482a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.170550 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:13 crc kubenswrapper[4933]: E1201 09:34:13.171059 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:13.671041065 +0000 UTC m=+144.312764680 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.192020 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh" podStartSLOduration=125.192000334 podStartE2EDuration="2m5.192000334s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:13.189703818 +0000 UTC m=+143.831427433" watchObservedRunningTime="2025-12-01 09:34:13.192000334 +0000 UTC m=+143.833723949" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.201525 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7jpf" podStartSLOduration=125.201492619 podStartE2EDuration="2m5.201492619s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:13.130939773 +0000 UTC m=+143.772663398" watchObservedRunningTime="2025-12-01 09:34:13.201492619 +0000 UTC m=+143.843216234" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.255049 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-xdrhr" podStartSLOduration=125.255027394 podStartE2EDuration="2m5.255027394s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:13.21970537 +0000 UTC m=+143.861428985" watchObservedRunningTime="2025-12-01 09:34:13.255027394 +0000 UTC m=+143.896751009" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.272292 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:13 crc kubenswrapper[4933]: E1201 09:34:13.272793 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:13.772775324 +0000 UTC m=+144.414498929 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.292460 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-lmgsj" podStartSLOduration=9.292437141 podStartE2EDuration="9.292437141s" podCreationTimestamp="2025-12-01 09:34:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:13.290049731 +0000 UTC m=+143.931773376" watchObservedRunningTime="2025-12-01 09:34:13.292437141 +0000 UTC m=+143.934160756" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.381845 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:13 crc kubenswrapper[4933]: E1201 09:34:13.389443 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:13.889412451 +0000 UTC m=+144.531136066 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.409082 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-84fbz" podStartSLOduration=123.409064598 podStartE2EDuration="2m3.409064598s" podCreationTimestamp="2025-12-01 09:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:13.35621886 +0000 UTC m=+143.997942505" watchObservedRunningTime="2025-12-01 09:34:13.409064598 +0000 UTC m=+144.050788213" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.444201 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sfb5s" podStartSLOduration=125.444173367 podStartE2EDuration="2m5.444173367s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:13.441110711 +0000 UTC m=+144.082834326" watchObservedRunningTime="2025-12-01 09:34:13.444173367 +0000 UTC m=+144.085896982" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.445492 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mjkjp" podStartSLOduration=125.44548327 podStartE2EDuration="2m5.44548327s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:13.407921019 +0000 UTC m=+144.049644644" watchObservedRunningTime="2025-12-01 09:34:13.44548327 +0000 UTC m=+144.087206885" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.462334 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-2clwd" podStartSLOduration=123.462299526 podStartE2EDuration="2m3.462299526s" podCreationTimestamp="2025-12-01 09:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:13.4608672 +0000 UTC m=+144.102590815" watchObservedRunningTime="2025-12-01 09:34:13.462299526 +0000 UTC m=+144.104023141" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.488484 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:13 crc kubenswrapper[4933]: E1201 09:34:13.488971 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2025-12-01 09:34:13.988954096 +0000 UTC m=+144.630677711 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.522729 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lzfq" podStartSLOduration=125.522696251 podStartE2EDuration="2m5.522696251s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:13.495624111 +0000 UTC m=+144.137347736" watchObservedRunningTime="2025-12-01 09:34:13.522696251 +0000 UTC m=+144.164419876" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.552439 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tg9q9" podStartSLOduration=125.552419248 podStartE2EDuration="2m5.552419248s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:13.523342387 +0000 UTC m=+144.165065992" watchObservedRunningTime="2025-12-01 09:34:13.552419248 +0000 UTC m=+144.194142863" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.554709 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5tk2j"] Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.556259 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5tk2j" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.564523 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.566029 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sh7dc" podStartSLOduration=125.566006793 podStartE2EDuration="2m5.566006793s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:13.5642445 +0000 UTC m=+144.205968115" watchObservedRunningTime="2025-12-01 09:34:13.566006793 +0000 UTC m=+144.207730408" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.576289 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5tk2j"] Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.593513 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n42jc" podStartSLOduration=125.593486304 podStartE2EDuration="2m5.593486304s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:13.590820768 +0000 UTC m=+144.232544383" watchObservedRunningTime="2025-12-01 09:34:13.593486304 +0000 UTC m=+144.235209919" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.596254 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.596517 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/194f9dd3-85db-4303-ad0e-180d0e160da0-catalog-content\") pod \"certified-operators-5tk2j\" (UID: \"194f9dd3-85db-4303-ad0e-180d0e160da0\") " pod="openshift-marketplace/certified-operators-5tk2j" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.596682 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhrfh\" (UniqueName: \"kubernetes.io/projected/194f9dd3-85db-4303-ad0e-180d0e160da0-kube-api-access-mhrfh\") pod \"certified-operators-5tk2j\" (UID: \"194f9dd3-85db-4303-ad0e-180d0e160da0\") " pod="openshift-marketplace/certified-operators-5tk2j" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.596816 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/194f9dd3-85db-4303-ad0e-180d0e160da0-utilities\") pod \"certified-operators-5tk2j\" (UID: \"194f9dd3-85db-4303-ad0e-180d0e160da0\") " pod="openshift-marketplace/certified-operators-5tk2j" Dec 01 09:34:13 crc kubenswrapper[4933]: E1201 09:34:13.597130 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:14.097107814 +0000 UTC m=+144.738831429 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.699658 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhrfh\" (UniqueName: \"kubernetes.io/projected/194f9dd3-85db-4303-ad0e-180d0e160da0-kube-api-access-mhrfh\") pod \"certified-operators-5tk2j\" (UID: \"194f9dd3-85db-4303-ad0e-180d0e160da0\") " pod="openshift-marketplace/certified-operators-5tk2j" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.700462 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/194f9dd3-85db-4303-ad0e-180d0e160da0-utilities\") pod \"certified-operators-5tk2j\" (UID: \"194f9dd3-85db-4303-ad0e-180d0e160da0\") " pod="openshift-marketplace/certified-operators-5tk2j" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.700653 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/194f9dd3-85db-4303-ad0e-180d0e160da0-catalog-content\") pod \"certified-operators-5tk2j\" (UID: \"194f9dd3-85db-4303-ad0e-180d0e160da0\") " pod="openshift-marketplace/certified-operators-5tk2j" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.700814 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.700874 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/194f9dd3-85db-4303-ad0e-180d0e160da0-utilities\") pod \"certified-operators-5tk2j\" (UID: \"194f9dd3-85db-4303-ad0e-180d0e160da0\") " pod="openshift-marketplace/certified-operators-5tk2j" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.701068 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/194f9dd3-85db-4303-ad0e-180d0e160da0-catalog-content\") pod \"certified-operators-5tk2j\" (UID: \"194f9dd3-85db-4303-ad0e-180d0e160da0\") " pod="openshift-marketplace/certified-operators-5tk2j" Dec 01 09:34:13 crc kubenswrapper[4933]: E1201 09:34:13.701466 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:14.201451037 +0000 UTC m=+144.843174872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.723700 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n22nk" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.745013 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhrfh\" (UniqueName: \"kubernetes.io/projected/194f9dd3-85db-4303-ad0e-180d0e160da0-kube-api-access-mhrfh\") pod \"certified-operators-5tk2j\" (UID: \"194f9dd3-85db-4303-ad0e-180d0e160da0\") " pod="openshift-marketplace/certified-operators-5tk2j" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.747501 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jlnld"] Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.752900 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jlnld" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.774917 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.776456 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jlnld"] Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.801798 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:13 crc kubenswrapper[4933]: E1201 09:34:13.802697 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:14.302668893 +0000 UTC m=+144.944392508 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.802935 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6jsd\" (UniqueName: \"kubernetes.io/projected/5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b-kube-api-access-b6jsd\") pod \"community-operators-jlnld\" (UID: \"5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b\") " pod="openshift-marketplace/community-operators-jlnld" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.803043 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.803251 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b-catalog-content\") pod \"community-operators-jlnld\" (UID: \"5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b\") " pod="openshift-marketplace/community-operators-jlnld" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.803414 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b-utilities\") pod \"community-operators-jlnld\" (UID: \"5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b\") " pod="openshift-marketplace/community-operators-jlnld" Dec 01 09:34:13 crc kubenswrapper[4933]: E1201 09:34:13.803765 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:14.30375744 +0000 UTC m=+144.945481045 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.904107 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:13 crc kubenswrapper[4933]: E1201 09:34:13.904199 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:14.404181906 +0000 UTC m=+145.045905521 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.904262 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6jsd\" (UniqueName: \"kubernetes.io/projected/5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b-kube-api-access-b6jsd\") pod \"community-operators-jlnld\" (UID: \"5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b\") " pod="openshift-marketplace/community-operators-jlnld" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.904299 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.904474 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b-catalog-content\") pod \"community-operators-jlnld\" (UID: \"5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b\") " pod="openshift-marketplace/community-operators-jlnld" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.904525 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b-utilities\") pod \"community-operators-jlnld\" (UID: \"5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b\") " pod="openshift-marketplace/community-operators-jlnld" Dec 01 09:34:13 crc kubenswrapper[4933]: E1201 09:34:13.904848 4933 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:14.404830562 +0000 UTC m=+145.046554177 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.905025 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b-catalog-content\") pod \"community-operators-jlnld\" (UID: \"5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b\") " pod="openshift-marketplace/community-operators-jlnld" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.905059 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b-utilities\") pod \"community-operators-jlnld\" (UID: \"5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b\") " pod="openshift-marketplace/community-operators-jlnld" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.917186 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5tk2j" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.926924 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6jsd\" (UniqueName: \"kubernetes.io/projected/5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b-kube-api-access-b6jsd\") pod \"community-operators-jlnld\" (UID: \"5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b\") " pod="openshift-marketplace/community-operators-jlnld" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.930887 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4gt9q"] Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.932797 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4gt9q" Dec 01 09:34:13 crc kubenswrapper[4933]: I1201 09:34:13.970217 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4gt9q"] Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.005405 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.005532 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e57325d2-83bc-484f-a95a-548b55435acd-utilities\") pod \"certified-operators-4gt9q\" (UID: \"e57325d2-83bc-484f-a95a-548b55435acd\") " pod="openshift-marketplace/certified-operators-4gt9q" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.005559 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h94p7\" (UniqueName: \"kubernetes.io/projected/e57325d2-83bc-484f-a95a-548b55435acd-kube-api-access-h94p7\") pod \"certified-operators-4gt9q\" (UID: \"e57325d2-83bc-484f-a95a-548b55435acd\") " pod="openshift-marketplace/certified-operators-4gt9q" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.005593 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e57325d2-83bc-484f-a95a-548b55435acd-catalog-content\") pod \"certified-operators-4gt9q\" (UID: \"e57325d2-83bc-484f-a95a-548b55435acd\") " pod="openshift-marketplace/certified-operators-4gt9q" Dec 01 09:34:14 crc kubenswrapper[4933]: E1201 09:34:14.005732 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:14.50571775 +0000 UTC m=+145.147441365 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.093336 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-sncl8" event={"ID":"9d7ca308-276f-4775-8828-abee226710b6","Type":"ContainerStarted","Data":"9f3e5244f73c21b40b09118a84dc9bb7eb2963ef601d0b9a21aa6b7fc6865265"} Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.101596 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jlnld" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.107109 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" event={"ID":"2020a25e-c390-4919-8f4f-3472caca4c14","Type":"ContainerStarted","Data":"1c30b3472a5f4669612379d399ea5c6768d15bff9ecb049c0f0801594209367c"} Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.107149 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" event={"ID":"2020a25e-c390-4919-8f4f-3472caca4c14","Type":"ContainerStarted","Data":"e43ccb01c928e7813dbb3c554903082f99b286fc2fbf5303bf4c0c032d766a68"} Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.108708 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e57325d2-83bc-484f-a95a-548b55435acd-utilities\") pod \"certified-operators-4gt9q\" (UID: \"e57325d2-83bc-484f-a95a-548b55435acd\") " pod="openshift-marketplace/certified-operators-4gt9q" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.108792 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h94p7\" (UniqueName: \"kubernetes.io/projected/e57325d2-83bc-484f-a95a-548b55435acd-kube-api-access-h94p7\") pod \"certified-operators-4gt9q\" (UID: \"e57325d2-83bc-484f-a95a-548b55435acd\") " pod="openshift-marketplace/certified-operators-4gt9q" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.108835 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e57325d2-83bc-484f-a95a-548b55435acd-catalog-content\") pod \"certified-operators-4gt9q\" (UID: \"e57325d2-83bc-484f-a95a-548b55435acd\") " pod="openshift-marketplace/certified-operators-4gt9q" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.108863 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.126454 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e57325d2-83bc-484f-a95a-548b55435acd-utilities\") pod \"certified-operators-4gt9q\" (UID: \"e57325d2-83bc-484f-a95a-548b55435acd\") " pod="openshift-marketplace/certified-operators-4gt9q" Dec 01 09:34:14 crc kubenswrapper[4933]: E1201 09:34:14.139044 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:14.63900645 +0000 UTC m=+145.280730075 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.140857 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-sncl8" podStartSLOduration=126.140826495 podStartE2EDuration="2m6.140826495s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:14.140123277 +0000 UTC m=+144.781846892" watchObservedRunningTime="2025-12-01 09:34:14.140826495 +0000 UTC m=+144.782550110" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.143140 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e57325d2-83bc-484f-a95a-548b55435acd-catalog-content\") pod \"certified-operators-4gt9q\" (UID: \"e57325d2-83bc-484f-a95a-548b55435acd\") " pod="openshift-marketplace/certified-operators-4gt9q" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.147681 4933 patch_prober.go:28] interesting pod/router-default-5444994796-wnvhn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 09:34:14 crc kubenswrapper[4933]: [-]has-synced failed: reason withheld Dec 01 09:34:14 crc kubenswrapper[4933]: [+]process-running ok Dec 01 09:34:14 crc kubenswrapper[4933]: healthz check failed Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.147724 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wnvhn" podUID="e360f8b0-0f1b-4a9b-9aed-cd0a8976482a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.168758 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vbvt7" event={"ID":"3e4cfcb2-ffde-40ce-8934-6f63d1816e9b","Type":"ContainerStarted","Data":"75334ea410eb21e02c0d8e6825b761f8e60c4e534c3fb300a341cf7a2bc48534"} Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.179941 4933 patch_prober.go:28] interesting pod/downloads-7954f5f757-v4fq8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.180000 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v4fq8" podUID="c121048e-9df5-412a-9d86-e7cf8a59d0e1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.190894 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zj2bn" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 
09:34:14.197095 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h94p7\" (UniqueName: \"kubernetes.io/projected/e57325d2-83bc-484f-a95a-548b55435acd-kube-api-access-h94p7\") pod \"certified-operators-4gt9q\" (UID: \"e57325d2-83bc-484f-a95a-548b55435acd\") " pod="openshift-marketplace/certified-operators-4gt9q" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.215553 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:14 crc kubenswrapper[4933]: E1201 09:34:14.217967 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:14.717915803 +0000 UTC m=+145.359639418 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.235427 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xpldw"] Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.238765 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xpldw" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.243016 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xpldw"] Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.279705 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4gt9q" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.324530 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:14 crc kubenswrapper[4933]: E1201 09:34:14.345510 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:14.845489532 +0000 UTC m=+145.487213147 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.346989 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" podStartSLOduration=126.346965848 podStartE2EDuration="2m6.346965848s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:14.170554661 +0000 UTC m=+144.812278286" watchObservedRunningTime="2025-12-01 09:34:14.346965848 +0000 UTC m=+144.988689463" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.355775 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vbvt7" podStartSLOduration=126.355750436 podStartE2EDuration="2m6.355750436s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:14.209105186 +0000 UTC m=+144.850828801" watchObservedRunningTime="2025-12-01 09:34:14.355750436 +0000 UTC m=+144.997474051" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.426149 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.426486 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtlv2\" (UniqueName: \"kubernetes.io/projected/1527ab8a-0674-4959-872e-bb759c7657e1-kube-api-access-wtlv2\") pod \"community-operators-xpldw\" (UID: \"1527ab8a-0674-4959-872e-bb759c7657e1\") " pod="openshift-marketplace/community-operators-xpldw" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.426557 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1527ab8a-0674-4959-872e-bb759c7657e1-catalog-content\") pod \"community-operators-xpldw\" (UID: \"1527ab8a-0674-4959-872e-bb759c7657e1\") " pod="openshift-marketplace/community-operators-xpldw" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.426587 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1527ab8a-0674-4959-872e-bb759c7657e1-utilities\") pod \"community-operators-xpldw\" (UID: \"1527ab8a-0674-4959-872e-bb759c7657e1\") " pod="openshift-marketplace/community-operators-xpldw" Dec 01 09:34:14 crc kubenswrapper[4933]: E1201 09:34:14.426770 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2025-12-01 09:34:14.926751164 +0000 UTC m=+145.568474779 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.529143 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.529567 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtlv2\" (UniqueName: \"kubernetes.io/projected/1527ab8a-0674-4959-872e-bb759c7657e1-kube-api-access-wtlv2\") pod \"community-operators-xpldw\" (UID: \"1527ab8a-0674-4959-872e-bb759c7657e1\") " pod="openshift-marketplace/community-operators-xpldw" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.529618 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1527ab8a-0674-4959-872e-bb759c7657e1-catalog-content\") pod \"community-operators-xpldw\" (UID: \"1527ab8a-0674-4959-872e-bb759c7657e1\") " pod="openshift-marketplace/community-operators-xpldw" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.529642 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1527ab8a-0674-4959-872e-bb759c7657e1-utilities\") pod \"community-operators-xpldw\" (UID: \"1527ab8a-0674-4959-872e-bb759c7657e1\") " pod="openshift-marketplace/community-operators-xpldw" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.530062 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1527ab8a-0674-4959-872e-bb759c7657e1-utilities\") pod \"community-operators-xpldw\" (UID: \"1527ab8a-0674-4959-872e-bb759c7657e1\") " pod="openshift-marketplace/community-operators-xpldw" Dec 01 09:34:14 crc kubenswrapper[4933]: E1201 09:34:14.530346 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:15.030331048 +0000 UTC m=+145.672054663 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.531734 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1527ab8a-0674-4959-872e-bb759c7657e1-catalog-content\") pod \"community-operators-xpldw\" (UID: \"1527ab8a-0674-4959-872e-bb759c7657e1\") " pod="openshift-marketplace/community-operators-xpldw" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.552579 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5tk2j"] Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.578234 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtlv2\" (UniqueName: \"kubernetes.io/projected/1527ab8a-0674-4959-872e-bb759c7657e1-kube-api-access-wtlv2\") pod \"community-operators-xpldw\" (UID: \"1527ab8a-0674-4959-872e-bb759c7657e1\") " pod="openshift-marketplace/community-operators-xpldw" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.610241 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xpldw" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.614015 4933 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.632927 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:14 crc kubenswrapper[4933]: E1201 09:34:14.633576 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:15.133508073 +0000 UTC m=+145.775231698 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.736386 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:14 crc kubenswrapper[4933]: E1201 09:34:14.736749 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:15.236733819 +0000 UTC m=+145.878457444 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.838220 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:14 crc kubenswrapper[4933]: E1201 09:34:14.839404 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:15.33938374 +0000 UTC m=+145.981107355 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.878337 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jlnld"] Dec 01 09:34:14 crc kubenswrapper[4933]: I1201 09:34:14.941904 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:14 crc kubenswrapper[4933]: E1201 09:34:14.942294 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:15.442280057 +0000 UTC m=+146.084003672 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:14 crc kubenswrapper[4933]: W1201 09:34:14.957069 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5de2c46e_8ecd_4bb3_b68e_9dfd7357c66b.slice/crio-fbe35d73c32857d78690449b0c4ae03bf6a1fbee86ae7bfbc866af8734818a70 WatchSource:0}: Error finding container fbe35d73c32857d78690449b0c4ae03bf6a1fbee86ae7bfbc866af8734818a70: Status 404 returned error can't find the container with id fbe35d73c32857d78690449b0c4ae03bf6a1fbee86ae7bfbc866af8734818a70 Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.003996 4933 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-01T09:34:14.614035961Z","Handler":null,"Name":""} Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.026564 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4gt9q"] Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.044288 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:15 crc kubenswrapper[4933]: E1201 09:34:15.044462 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 09:34:15.544435906 +0000 UTC m=+146.186159521 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.044712 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:15 crc kubenswrapper[4933]: E1201 09:34:15.045002 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 09:34:15.54498925 +0000 UTC m=+146.186712865 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-znqzs" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.119710 4933 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.119760 4933 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.128390 4933 patch_prober.go:28] interesting pod/router-default-5444994796-wnvhn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 09:34:15 crc kubenswrapper[4933]: [-]has-synced failed: reason withheld Dec 01 09:34:15 crc kubenswrapper[4933]: [+]process-running ok Dec 01 09:34:15 crc kubenswrapper[4933]: healthz check failed Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.128482 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wnvhn" podUID="e360f8b0-0f1b-4a9b-9aed-cd0a8976482a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.146478 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.194085 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q5ch5" event={"ID":"64c1704c-f0a1-4401-bc2d-46febb3ba534","Type":"ContainerStarted","Data":"cd5a35f887e3ace4a050363702425594207ec0c6a2a253ad77e46f80add9f536"} Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.201253 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4gt9q" event={"ID":"e57325d2-83bc-484f-a95a-548b55435acd","Type":"ContainerStarted","Data":"50cf9783da257c004354ac1cab01a5d780472e07804748dea47c1ed3b2a76aad"} Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.206970 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jlnld" event={"ID":"5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b","Type":"ContainerStarted","Data":"fbe35d73c32857d78690449b0c4ae03bf6a1fbee86ae7bfbc866af8734818a70"} Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.210680 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5tk2j" event={"ID":"194f9dd3-85db-4303-ad0e-180d0e160da0","Type":"ContainerStarted","Data":"981446bc66b3b20b29e680484a2d60bbbe1f91523a43ddc2790ee63826229ed5"} Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.210770 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5tk2j" event={"ID":"194f9dd3-85db-4303-ad0e-180d0e160da0","Type":"ContainerStarted","Data":"b331849fd4912bc02f801efd3e1ed42ca1d4fb163f4b69428de8f9b9e6ef8b91"} Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.212989 4933 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.283011 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.368021 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.437618 4933 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.437668 4933 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-znqzs"
Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.452945 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sh7dc"
Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.522761 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xpldw"]
Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.533136 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ngb5q"]
Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.534359 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngb5q"
Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.540925 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.553352 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngb5q"]
Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.595259 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-znqzs\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") " pod="openshift-image-registry/image-registry-697d97f7c8-znqzs"
Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.666339 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-znqzs"
Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.676458 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a08a8024-ebc2-4e05-a6a0-ebc22bed8658-utilities\") pod \"redhat-marketplace-ngb5q\" (UID: \"a08a8024-ebc2-4e05-a6a0-ebc22bed8658\") " pod="openshift-marketplace/redhat-marketplace-ngb5q"
Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.676530 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a08a8024-ebc2-4e05-a6a0-ebc22bed8658-catalog-content\") pod \"redhat-marketplace-ngb5q\" (UID: \"a08a8024-ebc2-4e05-a6a0-ebc22bed8658\") " pod="openshift-marketplace/redhat-marketplace-ngb5q"
Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.676615 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdttm\" (UniqueName: \"kubernetes.io/projected/a08a8024-ebc2-4e05-a6a0-ebc22bed8658-kube-api-access-rdttm\") pod \"redhat-marketplace-ngb5q\" (UID: \"a08a8024-ebc2-4e05-a6a0-ebc22bed8658\") " pod="openshift-marketplace/redhat-marketplace-ngb5q"
Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.681199 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.777687 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a08a8024-ebc2-4e05-a6a0-ebc22bed8658-utilities\") pod \"redhat-marketplace-ngb5q\" (UID: \"a08a8024-ebc2-4e05-a6a0-ebc22bed8658\") " pod="openshift-marketplace/redhat-marketplace-ngb5q"
Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.777995 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a08a8024-ebc2-4e05-a6a0-ebc22bed8658-catalog-content\") pod \"redhat-marketplace-ngb5q\" (UID: \"a08a8024-ebc2-4e05-a6a0-ebc22bed8658\") " pod="openshift-marketplace/redhat-marketplace-ngb5q"
Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.778062 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdttm\" (UniqueName: \"kubernetes.io/projected/a08a8024-ebc2-4e05-a6a0-ebc22bed8658-kube-api-access-rdttm\") pod \"redhat-marketplace-ngb5q\" (UID: \"a08a8024-ebc2-4e05-a6a0-ebc22bed8658\") " pod="openshift-marketplace/redhat-marketplace-ngb5q"
Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.778600 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a08a8024-ebc2-4e05-a6a0-ebc22bed8658-utilities\") pod \"redhat-marketplace-ngb5q\" (UID: \"a08a8024-ebc2-4e05-a6a0-ebc22bed8658\") " pod="openshift-marketplace/redhat-marketplace-ngb5q"
Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.779094 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a08a8024-ebc2-4e05-a6a0-ebc22bed8658-catalog-content\") pod \"redhat-marketplace-ngb5q\" (UID: \"a08a8024-ebc2-4e05-a6a0-ebc22bed8658\") " pod="openshift-marketplace/redhat-marketplace-ngb5q"
Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.811897 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdttm\" (UniqueName: \"kubernetes.io/projected/a08a8024-ebc2-4e05-a6a0-ebc22bed8658-kube-api-access-rdttm\") pod \"redhat-marketplace-ngb5q\" (UID: \"a08a8024-ebc2-4e05-a6a0-ebc22bed8658\") " pod="openshift-marketplace/redhat-marketplace-ngb5q"
Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.888868 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngb5q"
Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.933477 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j4cjr"]
Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.937255 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4cjr"
Dec 01 09:34:15 crc kubenswrapper[4933]: I1201 09:34:15.945827 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4cjr"]
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.033480 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-znqzs"]
Dec 01 09:34:16 crc kubenswrapper[4933]: W1201 09:34:16.041963 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod123185e0_6f42_4a97_8107_c1e8a91d0ea9.slice/crio-a2650103232582969df238911602cf92bc241bedde968eb23e513d93835e723e WatchSource:0}: Error finding container a2650103232582969df238911602cf92bc241bedde968eb23e513d93835e723e: Status 404 returned error can't find the container with id a2650103232582969df238911602cf92bc241bedde968eb23e513d93835e723e
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.082909 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb6d65b-3b0f-4d8f-83be-242ed7b0807c-catalog-content\") pod \"redhat-marketplace-j4cjr\" (UID: \"4bb6d65b-3b0f-4d8f-83be-242ed7b0807c\") " pod="openshift-marketplace/redhat-marketplace-j4cjr"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.082968 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq9fr\" (UniqueName: \"kubernetes.io/projected/4bb6d65b-3b0f-4d8f-83be-242ed7b0807c-kube-api-access-zq9fr\") pod \"redhat-marketplace-j4cjr\" (UID: \"4bb6d65b-3b0f-4d8f-83be-242ed7b0807c\") " pod="openshift-marketplace/redhat-marketplace-j4cjr"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.083068 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb6d65b-3b0f-4d8f-83be-242ed7b0807c-utilities\") pod \"redhat-marketplace-j4cjr\" (UID: \"4bb6d65b-3b0f-4d8f-83be-242ed7b0807c\") " pod="openshift-marketplace/redhat-marketplace-j4cjr"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.130600 4933 patch_prober.go:28] interesting pod/router-default-5444994796-wnvhn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 01 09:34:16 crc kubenswrapper[4933]: [-]has-synced failed: reason withheld
Dec 01 09:34:16 crc kubenswrapper[4933]: [+]process-running ok
Dec 01 09:34:16 crc kubenswrapper[4933]: healthz check failed
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.130663 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wnvhn" podUID="e360f8b0-0f1b-4a9b-9aed-cd0a8976482a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.144608 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-x74qn"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.145402 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-x74qn"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.147636 4933 patch_prober.go:28] interesting pod/console-f9d7485db-x74qn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.147694 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-x74qn" podUID="45bbe65f-8e73-4b73-863c-15db667e3e22" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.184361 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb6d65b-3b0f-4d8f-83be-242ed7b0807c-catalog-content\") pod \"redhat-marketplace-j4cjr\" (UID: \"4bb6d65b-3b0f-4d8f-83be-242ed7b0807c\") " pod="openshift-marketplace/redhat-marketplace-j4cjr"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.184409 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq9fr\" (UniqueName: \"kubernetes.io/projected/4bb6d65b-3b0f-4d8f-83be-242ed7b0807c-kube-api-access-zq9fr\") pod \"redhat-marketplace-j4cjr\" (UID: \"4bb6d65b-3b0f-4d8f-83be-242ed7b0807c\") " pod="openshift-marketplace/redhat-marketplace-j4cjr"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.184478 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb6d65b-3b0f-4d8f-83be-242ed7b0807c-utilities\") pod \"redhat-marketplace-j4cjr\" (UID: \"4bb6d65b-3b0f-4d8f-83be-242ed7b0807c\") " pod="openshift-marketplace/redhat-marketplace-j4cjr"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.185293 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb6d65b-3b0f-4d8f-83be-242ed7b0807c-utilities\") pod \"redhat-marketplace-j4cjr\" (UID: \"4bb6d65b-3b0f-4d8f-83be-242ed7b0807c\") " pod="openshift-marketplace/redhat-marketplace-j4cjr"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.185534 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb6d65b-3b0f-4d8f-83be-242ed7b0807c-catalog-content\") pod \"redhat-marketplace-j4cjr\" (UID: \"4bb6d65b-3b0f-4d8f-83be-242ed7b0807c\") " pod="openshift-marketplace/redhat-marketplace-j4cjr"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.198812 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngb5q"]
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.205567 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq9fr\" (UniqueName: \"kubernetes.io/projected/4bb6d65b-3b0f-4d8f-83be-242ed7b0807c-kube-api-access-zq9fr\") pod \"redhat-marketplace-j4cjr\" (UID: \"4bb6d65b-3b0f-4d8f-83be-242ed7b0807c\") " pod="openshift-marketplace/redhat-marketplace-j4cjr"
Dec 01 09:34:16 crc kubenswrapper[4933]: W1201 09:34:16.208186 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda08a8024_ebc2_4e05_a6a0_ebc22bed8658.slice/crio-c10022b0cde993bc5fcca9f7d4efbbeceac024c182ec3faeffc6d4a7a60550c7 WatchSource:0}: Error finding container c10022b0cde993bc5fcca9f7d4efbbeceac024c182ec3faeffc6d4a7a60550c7: Status 404 returned error can't find the container with id c10022b0cde993bc5fcca9f7d4efbbeceac024c182ec3faeffc6d4a7a60550c7
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.219168 4933 generic.go:334] "Generic (PLEG): container finished" podID="194f9dd3-85db-4303-ad0e-180d0e160da0" containerID="981446bc66b3b20b29e680484a2d60bbbe1f91523a43ddc2790ee63826229ed5" exitCode=0
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.219225 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5tk2j" event={"ID":"194f9dd3-85db-4303-ad0e-180d0e160da0","Type":"ContainerDied","Data":"981446bc66b3b20b29e680484a2d60bbbe1f91523a43ddc2790ee63826229ed5"}
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.221594 4933 generic.go:334] "Generic (PLEG): container finished" podID="1527ab8a-0674-4959-872e-bb759c7657e1" containerID="f7222373b409b4111d32dc6fa28172c33cda650a0eaeb0e86aa23b1f274a8da5" exitCode=0
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.221713 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpldw" event={"ID":"1527ab8a-0674-4959-872e-bb759c7657e1","Type":"ContainerDied","Data":"f7222373b409b4111d32dc6fa28172c33cda650a0eaeb0e86aa23b1f274a8da5"}
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.221746 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpldw" event={"ID":"1527ab8a-0674-4959-872e-bb759c7657e1","Type":"ContainerStarted","Data":"0317e139561546c77bf7d9c99c680fd4f0cb20d79d274b8692b97dc5ef99c974"}
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.223198 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" event={"ID":"123185e0-6f42-4a97-8107-c1e8a91d0ea9","Type":"ContainerStarted","Data":"5c5a83f93c551897a0082d1d7aaf46e2b06f9d084ce46bb8f71431d99444558b"}
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.223223 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" event={"ID":"123185e0-6f42-4a97-8107-c1e8a91d0ea9","Type":"ContainerStarted","Data":"a2650103232582969df238911602cf92bc241bedde968eb23e513d93835e723e"}
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.223565 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-znqzs"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.226503 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q5ch5" event={"ID":"64c1704c-f0a1-4401-bc2d-46febb3ba534","Type":"ContainerStarted","Data":"bbeaa415bed60de073a6cf33d5ac1c2a718235fb9964c399734c032b075b6f05"}
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.226539 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q5ch5" event={"ID":"64c1704c-f0a1-4401-bc2d-46febb3ba534","Type":"ContainerStarted","Data":"ef43615c0b5c3d67ac5c3a78a8cbc3ea1a27a93f7a46367ea1c8de22f7f04c78"}
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.227912 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngb5q" event={"ID":"a08a8024-ebc2-4e05-a6a0-ebc22bed8658","Type":"ContainerStarted","Data":"c10022b0cde993bc5fcca9f7d4efbbeceac024c182ec3faeffc6d4a7a60550c7"}
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.234642 4933 generic.go:334] "Generic (PLEG): container finished" podID="e57325d2-83bc-484f-a95a-548b55435acd" containerID="ad0fc573890e5d239c1fdb34221ecd4bb6afc1baf239f946a96f7c04e67574b1" exitCode=0
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.234725 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4gt9q" event={"ID":"e57325d2-83bc-484f-a95a-548b55435acd","Type":"ContainerDied","Data":"ad0fc573890e5d239c1fdb34221ecd4bb6afc1baf239f946a96f7c04e67574b1"}
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.237252 4933 generic.go:334] "Generic (PLEG): container finished" podID="5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b" containerID="652ee23bd10b140c3198be615bab0552422ca4d67eed9fdd462dff08a6a4f836" exitCode=0
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.238266 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jlnld" event={"ID":"5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b","Type":"ContainerDied","Data":"652ee23bd10b140c3198be615bab0552422ca4d67eed9fdd462dff08a6a4f836"}
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.271074 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-q5ch5" podStartSLOduration=12.271054644 podStartE2EDuration="12.271054644s" podCreationTimestamp="2025-12-01 09:34:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:16.268734187 +0000 UTC m=+146.910457802" watchObservedRunningTime="2025-12-01 09:34:16.271054644 +0000 UTC m=+146.912778249"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.274168 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4cjr"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.342144 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" podStartSLOduration=128.342124894 podStartE2EDuration="2m8.342124894s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:16.339960141 +0000 UTC m=+146.981683776" watchObservedRunningTime="2025-12-01 09:34:16.342124894 +0000 UTC m=+146.983848529"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.366848 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-dtcjv"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.371735 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-dtcjv"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.381756 4933 patch_prober.go:28] interesting pod/apiserver-76f77b778f-dtcjv container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 01 09:34:16 crc kubenswrapper[4933]: [+]log ok
Dec 01 09:34:16 crc kubenswrapper[4933]: [+]etcd ok
Dec 01 09:34:16 crc kubenswrapper[4933]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 01 09:34:16 crc kubenswrapper[4933]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 01 09:34:16 crc kubenswrapper[4933]: [+]poststarthook/max-in-flight-filter ok
Dec 01 09:34:16 crc kubenswrapper[4933]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 01 09:34:16 crc kubenswrapper[4933]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Dec 01 09:34:16 crc kubenswrapper[4933]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Dec 01 09:34:16 crc kubenswrapper[4933]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Dec 01 09:34:16 crc kubenswrapper[4933]: [+]poststarthook/project.openshift.io-projectcache ok
Dec 01 09:34:16 crc kubenswrapper[4933]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Dec 01 09:34:16 crc kubenswrapper[4933]: [+]poststarthook/openshift.io-startinformers ok
Dec 01 09:34:16 crc kubenswrapper[4933]: [+]poststarthook/openshift.io-restmapperupdater ok
Dec 01 09:34:16 crc kubenswrapper[4933]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 01 09:34:16 crc kubenswrapper[4933]: livez check failed
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.381824 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" podUID="2020a25e-c390-4919-8f4f-3472caca4c14" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.384899 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-4wqht"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.430916 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-nwhhr"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.583161 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4cjr"]
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.588983 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.589014 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh"
Dec 01 09:34:16 crc kubenswrapper[4933]: W1201 09:34:16.590105 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bb6d65b_3b0f_4d8f_83be_242ed7b0807c.slice/crio-5278b130fac54ba64d9add799dbbbc3be3407630474f7242566b39612c37fede WatchSource:0}: Error finding container 5278b130fac54ba64d9add799dbbbc3be3407630474f7242566b39612c37fede: Status 404 returned error can't find the container with id 5278b130fac54ba64d9add799dbbbc3be3407630474f7242566b39612c37fede
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.599209 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.786189 4933 patch_prober.go:28] interesting pod/downloads-7954f5f757-v4fq8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.786213 4933 patch_prober.go:28] interesting pod/downloads-7954f5f757-v4fq8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.786290 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-v4fq8" podUID="c121048e-9df5-412a-9d86-e7cf8a59d0e1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.786355 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v4fq8" podUID="c121048e-9df5-412a-9d86-e7cf8a59d0e1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.926169 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b8gph"]
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.927370 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b8gph"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.929877 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Dec 01 09:34:16 crc kubenswrapper[4933]: I1201 09:34:16.941179 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b8gph"]
Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.008624 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjx5q\" (UniqueName: \"kubernetes.io/projected/c1f2e651-74da-4f9c-9294-c2d45830b676-kube-api-access-sjx5q\") pod \"redhat-operators-b8gph\" (UID: \"c1f2e651-74da-4f9c-9294-c2d45830b676\") " pod="openshift-marketplace/redhat-operators-b8gph"
Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.008703 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1f2e651-74da-4f9c-9294-c2d45830b676-catalog-content\") pod \"redhat-operators-b8gph\" (UID: \"c1f2e651-74da-4f9c-9294-c2d45830b676\") " pod="openshift-marketplace/redhat-operators-b8gph"
Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.008788 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1f2e651-74da-4f9c-9294-c2d45830b676-utilities\") pod \"redhat-operators-b8gph\" (UID: \"c1f2e651-74da-4f9c-9294-c2d45830b676\") " pod="openshift-marketplace/redhat-operators-b8gph"
Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.110108 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1f2e651-74da-4f9c-9294-c2d45830b676-utilities\") pod \"redhat-operators-b8gph\" (UID: \"c1f2e651-74da-4f9c-9294-c2d45830b676\") " pod="openshift-marketplace/redhat-operators-b8gph"
Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.110552 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjx5q\" (UniqueName: \"kubernetes.io/projected/c1f2e651-74da-4f9c-9294-c2d45830b676-kube-api-access-sjx5q\") pod \"redhat-operators-b8gph\" (UID: \"c1f2e651-74da-4f9c-9294-c2d45830b676\") " pod="openshift-marketplace/redhat-operators-b8gph"
Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.110586 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1f2e651-74da-4f9c-9294-c2d45830b676-catalog-content\") pod \"redhat-operators-b8gph\" (UID: \"c1f2e651-74da-4f9c-9294-c2d45830b676\") " pod="openshift-marketplace/redhat-operators-b8gph"
Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.111078 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1f2e651-74da-4f9c-9294-c2d45830b676-catalog-content\") pod \"redhat-operators-b8gph\" (UID: \"c1f2e651-74da-4f9c-9294-c2d45830b676\") " pod="openshift-marketplace/redhat-operators-b8gph"
Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.111429 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1f2e651-74da-4f9c-9294-c2d45830b676-utilities\") pod \"redhat-operators-b8gph\" (UID: \"c1f2e651-74da-4f9c-9294-c2d45830b676\") " pod="openshift-marketplace/redhat-operators-b8gph"
Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.121715 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-wnvhn"
Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.126108 4933 patch_prober.go:28] interesting pod/router-default-5444994796-wnvhn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 01 09:34:17 crc kubenswrapper[4933]: [-]has-synced failed: reason withheld
Dec 01 09:34:17 crc kubenswrapper[4933]: [+]process-running ok
Dec 01 09:34:17 crc kubenswrapper[4933]: healthz check failed
Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.126157 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wnvhn" podUID="e360f8b0-0f1b-4a9b-9aed-cd0a8976482a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.133562 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjx5q\" (UniqueName: \"kubernetes.io/projected/c1f2e651-74da-4f9c-9294-c2d45830b676-kube-api-access-sjx5q\") pod \"redhat-operators-b8gph\" (UID: \"c1f2e651-74da-4f9c-9294-c2d45830b676\") " pod="openshift-marketplace/redhat-operators-b8gph"
Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.246101 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b8gph"
Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.250520 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4cjr" event={"ID":"4bb6d65b-3b0f-4d8f-83be-242ed7b0807c","Type":"ContainerStarted","Data":"5278b130fac54ba64d9add799dbbbc3be3407630474f7242566b39612c37fede"}
Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.256694 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6qdvh"
Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.313223 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.313576 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.313734 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 01 09:34:17 crc kubenswrapper[4933]: I1201
09:34:17.314551 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.317697 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.318870 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.320592 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.325795 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pjpdf"] Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.327287 4933 util.go:30] "No sandbox for pod can be found. 
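
[note] The reconciler entries above walk the kubelet volume manager's normal sequence for a freshly admitted pod: VerifyControllerAttachedVolume records the volume in the actual state of world, operationExecutor.MountVolume starts, and MountVolume.SetUp succeeds — here for emptyDir, configMap, secret, and projected service-account-token volumes. A toy desired-versus-actual reconcile loop in the same spirit (all names hypothetical, stdlib only):

    package main

    import "fmt"

    // reconcile is a toy version of the volume manager's loop: volumes in
    // the desired set but not yet in the actual set get "mounted"; volumes
    // still mounted but no longer desired get "unmounted".
    func reconcile(desired, actual map[string]bool) {
        for v := range desired {
            if !actual[v] {
                fmt.Println("MountVolume.SetUp succeeded:", v)
                actual[v] = true
            }
        }
        for v := range actual {
            if !desired[v] {
                fmt.Println("UnmountVolume.TearDown succeeded:", v)
                delete(actual, v)
            }
        }
    }

    func main() {
        desired := map[string]bool{"utilities": true, "catalog-content": true, "kube-api-access-kt2mq": true}
        reconcile(desired, map[string]bool{}) // mounts all three, as in the log
    }

The unmount half of the same loop appears further down, once the run-to-completion pods finish and their volumes are torn down.
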
Need to start a new one" pod="openshift-marketplace/redhat-operators-pjpdf" Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.337016 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pjpdf"] Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.420993 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/447e5be4-e974-45c7-a58a-2efddd4bd49c-utilities\") pod \"redhat-operators-pjpdf\" (UID: \"447e5be4-e974-45c7-a58a-2efddd4bd49c\") " pod="openshift-marketplace/redhat-operators-pjpdf" Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.421042 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt2mq\" (UniqueName: \"kubernetes.io/projected/447e5be4-e974-45c7-a58a-2efddd4bd49c-kube-api-access-kt2mq\") pod \"redhat-operators-pjpdf\" (UID: \"447e5be4-e974-45c7-a58a-2efddd4bd49c\") " pod="openshift-marketplace/redhat-operators-pjpdf" Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.421154 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/447e5be4-e974-45c7-a58a-2efddd4bd49c-catalog-content\") pod \"redhat-operators-pjpdf\" (UID: \"447e5be4-e974-45c7-a58a-2efddd4bd49c\") " pod="openshift-marketplace/redhat-operators-pjpdf" Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.490726 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b8gph"] Dec 01 09:34:17 crc kubenswrapper[4933]: W1201 09:34:17.495517 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1f2e651_74da_4f9c_9294_c2d45830b676.slice/crio-5d7bd0bef41b31e215b5852af435027a745e1d216ed6aa77c691f96dfcbed963 WatchSource:0}: Error finding container 5d7bd0bef41b31e215b5852af435027a745e1d216ed6aa77c691f96dfcbed963: Status 404 returned error can't find the container with id 5d7bd0bef41b31e215b5852af435027a745e1d216ed6aa77c691f96dfcbed963 Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.522231 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/447e5be4-e974-45c7-a58a-2efddd4bd49c-utilities\") pod \"redhat-operators-pjpdf\" (UID: \"447e5be4-e974-45c7-a58a-2efddd4bd49c\") " pod="openshift-marketplace/redhat-operators-pjpdf" Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.522729 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt2mq\" (UniqueName: \"kubernetes.io/projected/447e5be4-e974-45c7-a58a-2efddd4bd49c-kube-api-access-kt2mq\") pod \"redhat-operators-pjpdf\" (UID: \"447e5be4-e974-45c7-a58a-2efddd4bd49c\") " pod="openshift-marketplace/redhat-operators-pjpdf" Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.522793 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/447e5be4-e974-45c7-a58a-2efddd4bd49c-catalog-content\") pod \"redhat-operators-pjpdf\" (UID: \"447e5be4-e974-45c7-a58a-2efddd4bd49c\") " pod="openshift-marketplace/redhat-operators-pjpdf" Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.523320 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/447e5be4-e974-45c7-a58a-2efddd4bd49c-utilities\") pod \"redhat-operators-pjpdf\" (UID: \"447e5be4-e974-45c7-a58a-2efddd4bd49c\") " pod="openshift-marketplace/redhat-operators-pjpdf" Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.523328 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/447e5be4-e974-45c7-a58a-2efddd4bd49c-catalog-content\") pod \"redhat-operators-pjpdf\" (UID: \"447e5be4-e974-45c7-a58a-2efddd4bd49c\") " pod="openshift-marketplace/redhat-operators-pjpdf" Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.540400 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt2mq\" (UniqueName: \"kubernetes.io/projected/447e5be4-e974-45c7-a58a-2efddd4bd49c-kube-api-access-kt2mq\") pod \"redhat-operators-pjpdf\" (UID: \"447e5be4-e974-45c7-a58a-2efddd4bd49c\") " pod="openshift-marketplace/redhat-operators-pjpdf" Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.592848 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.599130 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.662684 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pjpdf" Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.669487 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:34:17 crc kubenswrapper[4933]: I1201 09:34:17.886320 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.096130 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.097104 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.101928 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.102192 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.103678 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.128268 4933 patch_prober.go:28] interesting pod/router-default-5444994796-wnvhn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 09:34:18 crc kubenswrapper[4933]: [-]has-synced failed: reason withheld Dec 01 09:34:18 crc kubenswrapper[4933]: [+]process-running ok Dec 01 09:34:18 crc kubenswrapper[4933]: healthz check failed Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.128351 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wnvhn" podUID="e360f8b0-0f1b-4a9b-9aed-cd0a8976482a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.218063 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pjpdf"] Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.243339 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a98312ce-1864-4e25-a331-251d0b430648-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a98312ce-1864-4e25-a331-251d0b430648\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.243562 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a98312ce-1864-4e25-a331-251d0b430648-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a98312ce-1864-4e25-a331-251d0b430648\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.269178 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8gph" event={"ID":"c1f2e651-74da-4f9c-9294-c2d45830b676","Type":"ContainerStarted","Data":"5d7bd0bef41b31e215b5852af435027a745e1d216ed6aa77c691f96dfcbed963"} Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.272812 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e9effdec0450009f85cc0f27fde8cd576e86e6a7376e2797a2fd14fcd963fc13"} Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.275937 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngb5q" event={"ID":"a08a8024-ebc2-4e05-a6a0-ebc22bed8658","Type":"ContainerStarted","Data":"2ae67593b03352d9b1881b6ff881c605afac2082cf75dd88e7e71375ea413121"} Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.277555 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-pjpdf" event={"ID":"447e5be4-e974-45c7-a58a-2efddd4bd49c","Type":"ContainerStarted","Data":"62cc5fd668b5c7f32f62c30623ea820158d9da8866368d789c685705bfe28549"} Dec 01 09:34:18 crc kubenswrapper[4933]: W1201 09:34:18.283841 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-72a8cbb14e809ea89bf2a63fc3c7f5b5d760db800994a9dd7b978ad991102494 WatchSource:0}: Error finding container 72a8cbb14e809ea89bf2a63fc3c7f5b5d760db800994a9dd7b978ad991102494: Status 404 returned error can't find the container with id 72a8cbb14e809ea89bf2a63fc3c7f5b5d760db800994a9dd7b978ad991102494 Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.344770 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a98312ce-1864-4e25-a331-251d0b430648-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a98312ce-1864-4e25-a331-251d0b430648\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.344885 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a98312ce-1864-4e25-a331-251d0b430648-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a98312ce-1864-4e25-a331-251d0b430648\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.345016 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a98312ce-1864-4e25-a331-251d0b430648-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a98312ce-1864-4e25-a331-251d0b430648\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.371867 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a98312ce-1864-4e25-a331-251d0b430648-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a98312ce-1864-4e25-a331-251d0b430648\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.377154 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.378008 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.380229 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.386739 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.392439 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.427485 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.549182 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fa915f5-8371-4df3-b019-ff9b17cae77e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9fa915f5-8371-4df3-b019-ff9b17cae77e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.549660 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9fa915f5-8371-4df3-b019-ff9b17cae77e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9fa915f5-8371-4df3-b019-ff9b17cae77e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.633276 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 01 09:34:18 crc kubenswrapper[4933]: W1201 09:34:18.646610 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda98312ce_1864_4e25_a331_251d0b430648.slice/crio-8ec36bfd892dee8485e3ccfec539321ccddc2023eabc864f71de4c6a02c23919 WatchSource:0}: Error finding container 8ec36bfd892dee8485e3ccfec539321ccddc2023eabc864f71de4c6a02c23919: Status 404 returned error can't find the container with id 8ec36bfd892dee8485e3ccfec539321ccddc2023eabc864f71de4c6a02c23919 Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.650851 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9fa915f5-8371-4df3-b019-ff9b17cae77e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9fa915f5-8371-4df3-b019-ff9b17cae77e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.650960 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fa915f5-8371-4df3-b019-ff9b17cae77e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9fa915f5-8371-4df3-b019-ff9b17cae77e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.650967 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9fa915f5-8371-4df3-b019-ff9b17cae77e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9fa915f5-8371-4df3-b019-ff9b17cae77e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.676671 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fa915f5-8371-4df3-b019-ff9b17cae77e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9fa915f5-8371-4df3-b019-ff9b17cae77e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 09:34:18 crc kubenswrapper[4933]: I1201 09:34:18.724177 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 09:34:19 crc kubenswrapper[4933]: I1201 09:34:19.137145 4933 patch_prober.go:28] interesting pod/router-default-5444994796-wnvhn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 09:34:19 crc kubenswrapper[4933]: [-]has-synced failed: reason withheld Dec 01 09:34:19 crc kubenswrapper[4933]: [+]process-running ok Dec 01 09:34:19 crc kubenswrapper[4933]: healthz check failed Dec 01 09:34:19 crc kubenswrapper[4933]: I1201 09:34:19.137827 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wnvhn" podUID="e360f8b0-0f1b-4a9b-9aed-cd0a8976482a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 09:34:19 crc kubenswrapper[4933]: I1201 09:34:19.338093 4933 generic.go:334] "Generic (PLEG): container finished" podID="4bb6d65b-3b0f-4d8f-83be-242ed7b0807c" containerID="be933f3c49a4be32c90954fd9fdf4c6b6ddb99741c9bcebec10839ce27a0a640" exitCode=0 Dec 01 09:34:19 crc kubenswrapper[4933]: I1201 09:34:19.338256 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4cjr" event={"ID":"4bb6d65b-3b0f-4d8f-83be-242ed7b0807c","Type":"ContainerDied","Data":"be933f3c49a4be32c90954fd9fdf4c6b6ddb99741c9bcebec10839ce27a0a640"} Dec 01 09:34:19 crc kubenswrapper[4933]: I1201 09:34:19.365040 4933 generic.go:334] "Generic (PLEG): container finished" podID="c1f2e651-74da-4f9c-9294-c2d45830b676" containerID="3cede0bf4cabf888a9bd44f0ee3514aca4d7a5f50aa450ab392bc92af7f98342" exitCode=0 Dec 01 09:34:19 crc kubenswrapper[4933]: I1201 09:34:19.365152 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8gph" event={"ID":"c1f2e651-74da-4f9c-9294-c2d45830b676","Type":"ContainerDied","Data":"3cede0bf4cabf888a9bd44f0ee3514aca4d7a5f50aa450ab392bc92af7f98342"} Dec 01 09:34:19 crc kubenswrapper[4933]: I1201 09:34:19.381989 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a98312ce-1864-4e25-a331-251d0b430648","Type":"ContainerStarted","Data":"8ec36bfd892dee8485e3ccfec539321ccddc2023eabc864f71de4c6a02c23919"} Dec 01 09:34:19 crc kubenswrapper[4933]: I1201 09:34:19.401931 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7c90f1c3cfb7b8501deaa5961645d856394b92a44440e0b7a24c3d4dd7062209"} Dec 01 09:34:19 crc kubenswrapper[4933]: I1201 09:34:19.405964 4933 generic.go:334] "Generic (PLEG): container finished" podID="a08a8024-ebc2-4e05-a6a0-ebc22bed8658" containerID="2ae67593b03352d9b1881b6ff881c605afac2082cf75dd88e7e71375ea413121" exitCode=0 Dec 01 09:34:19 crc kubenswrapper[4933]: I1201 09:34:19.406034 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngb5q" event={"ID":"a08a8024-ebc2-4e05-a6a0-ebc22bed8658","Type":"ContainerDied","Data":"2ae67593b03352d9b1881b6ff881c605afac2082cf75dd88e7e71375ea413121"} Dec 01 09:34:19 crc kubenswrapper[4933]: I1201 09:34:19.411357 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 01 09:34:19 crc kubenswrapper[4933]: I1201 09:34:19.417608 
4933 generic.go:334] "Generic (PLEG): container finished" podID="a3422f33-b5ab-4658-86a0-c908efca7db9" containerID="a48d1ad0a3c4f2dc67a0ef25a006868e41fb607592f16bc6a8234a203356793e" exitCode=0 Dec 01 09:34:19 crc kubenswrapper[4933]: I1201 09:34:19.417693 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-jvl8l" event={"ID":"a3422f33-b5ab-4658-86a0-c908efca7db9","Type":"ContainerDied","Data":"a48d1ad0a3c4f2dc67a0ef25a006868e41fb607592f16bc6a8234a203356793e"} Dec 01 09:34:19 crc kubenswrapper[4933]: I1201 09:34:19.420483 4933 generic.go:334] "Generic (PLEG): container finished" podID="447e5be4-e974-45c7-a58a-2efddd4bd49c" containerID="5110bfbf6564a5fcf28396a6f0b1c7603f7022d80bce6355980e28b6b89d99f3" exitCode=0 Dec 01 09:34:19 crc kubenswrapper[4933]: I1201 09:34:19.420538 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjpdf" event={"ID":"447e5be4-e974-45c7-a58a-2efddd4bd49c","Type":"ContainerDied","Data":"5110bfbf6564a5fcf28396a6f0b1c7603f7022d80bce6355980e28b6b89d99f3"} Dec 01 09:34:19 crc kubenswrapper[4933]: I1201 09:34:19.433562 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"853a7cfb585dd0ce8c4c7753f156dee25de0e3d0216a7cfbee234ee6262ec619"} Dec 01 09:34:19 crc kubenswrapper[4933]: I1201 09:34:19.433606 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"72a8cbb14e809ea89bf2a63fc3c7f5b5d760db800994a9dd7b978ad991102494"} Dec 01 09:34:19 crc kubenswrapper[4933]: I1201 09:34:19.441223 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:34:19 crc kubenswrapper[4933]: I1201 09:34:19.472042 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"72a20d398b3b0fe47bec624d5c1cd3643cdd8a5ff5d21b032ceff87318bca45f"} Dec 01 09:34:19 crc kubenswrapper[4933]: I1201 09:34:19.472094 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"02a276686077884e138b4270c34043f76ccc3cda53f93ad93ebbbc11c6b6f3da"} Dec 01 09:34:20 crc kubenswrapper[4933]: I1201 09:34:20.126182 4933 patch_prober.go:28] interesting pod/router-default-5444994796-wnvhn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 09:34:20 crc kubenswrapper[4933]: [-]has-synced failed: reason withheld Dec 01 09:34:20 crc kubenswrapper[4933]: [+]process-running ok Dec 01 09:34:20 crc kubenswrapper[4933]: healthz check failed Dec 01 09:34:20 crc kubenswrapper[4933]: I1201 09:34:20.126611 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wnvhn" podUID="e360f8b0-0f1b-4a9b-9aed-cd0a8976482a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 09:34:20 crc kubenswrapper[4933]: I1201 09:34:20.497990 
4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a98312ce-1864-4e25-a331-251d0b430648","Type":"ContainerStarted","Data":"357c6a3b52853aa21f00b8ed1dd25648e86944d086581a8a977b2daaaf865dde"} Dec 01 09:34:20 crc kubenswrapper[4933]: I1201 09:34:20.502239 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9fa915f5-8371-4df3-b019-ff9b17cae77e","Type":"ContainerStarted","Data":"f8e88b3ea00aef494bda488c75f42661c263e3875fb6d34d50be56cb13bc4c97"} Dec 01 09:34:20 crc kubenswrapper[4933]: I1201 09:34:20.502297 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9fa915f5-8371-4df3-b019-ff9b17cae77e","Type":"ContainerStarted","Data":"1a8e4183c23a9896b6fd310804e9e4dc242a8cc7f6b69a533dfa0ab949ebd524"} Dec 01 09:34:20 crc kubenswrapper[4933]: I1201 09:34:20.523876 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.523855244 podStartE2EDuration="2.523855244s" podCreationTimestamp="2025-12-01 09:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:20.52125477 +0000 UTC m=+151.162978395" watchObservedRunningTime="2025-12-01 09:34:20.523855244 +0000 UTC m=+151.165578859" Dec 01 09:34:20 crc kubenswrapper[4933]: I1201 09:34:20.546136 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.546113395 podStartE2EDuration="2.546113395s" podCreationTimestamp="2025-12-01 09:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:20.543291685 +0000 UTC m=+151.185015300" watchObservedRunningTime="2025-12-01 09:34:20.546113395 +0000 UTC m=+151.187837010" Dec 01 09:34:21 crc kubenswrapper[4933]: I1201 09:34:21.106255 4933 util.go:48] "No ready sandbox for pod can be found. 
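
[note] The "Generic (PLEG): container finished" / "SyncLoop (PLEG): event for pod" pairs above come from the pod lifecycle event generator: it relists container states and converts transitions into ContainerStarted/ContainerDied events for the sync loop; exitCode=0 on the Died events means these extract and pruner containers ran to completion rather than crashing. A toy relist diff (stdlib, names hypothetical):

    package main

    import "fmt"

    type event struct{ id, kind string }

    // diffStates is a toy PLEG relist: comparing the previous and current
    // container states yields the Started/Died events seen in the log.
    func diffStates(old, cur map[string]string) []event {
        var evs []event
        for id, state := range cur {
            switch {
            case old[id] != "running" && state == "running":
                evs = append(evs, event{id, "ContainerStarted"})
            case old[id] == "running" && state == "exited":
                evs = append(evs, event{id, "ContainerDied"})
            }
        }
        return evs
    }

    func main() {
        old := map[string]string{"357c6a3b": "running"}
        cur := map[string]string{"357c6a3b": "exited"}
        fmt.Println(diffStates(old, cur)) // [{357c6a3b ContainerDied}]
    }
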
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-jvl8l" Dec 01 09:34:21 crc kubenswrapper[4933]: I1201 09:34:21.146627 4933 patch_prober.go:28] interesting pod/router-default-5444994796-wnvhn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 09:34:21 crc kubenswrapper[4933]: [-]has-synced failed: reason withheld Dec 01 09:34:21 crc kubenswrapper[4933]: [+]process-running ok Dec 01 09:34:21 crc kubenswrapper[4933]: healthz check failed Dec 01 09:34:21 crc kubenswrapper[4933]: I1201 09:34:21.146708 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wnvhn" podUID="e360f8b0-0f1b-4a9b-9aed-cd0a8976482a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 09:34:21 crc kubenswrapper[4933]: I1201 09:34:21.276251 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3422f33-b5ab-4658-86a0-c908efca7db9-config-volume\") pod \"a3422f33-b5ab-4658-86a0-c908efca7db9\" (UID: \"a3422f33-b5ab-4658-86a0-c908efca7db9\") " Dec 01 09:34:21 crc kubenswrapper[4933]: I1201 09:34:21.276491 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv9w2\" (UniqueName: \"kubernetes.io/projected/a3422f33-b5ab-4658-86a0-c908efca7db9-kube-api-access-jv9w2\") pod \"a3422f33-b5ab-4658-86a0-c908efca7db9\" (UID: \"a3422f33-b5ab-4658-86a0-c908efca7db9\") " Dec 01 09:34:21 crc kubenswrapper[4933]: I1201 09:34:21.276554 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3422f33-b5ab-4658-86a0-c908efca7db9-secret-volume\") pod \"a3422f33-b5ab-4658-86a0-c908efca7db9\" (UID: \"a3422f33-b5ab-4658-86a0-c908efca7db9\") " Dec 01 09:34:21 crc kubenswrapper[4933]: I1201 09:34:21.295440 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3422f33-b5ab-4658-86a0-c908efca7db9-config-volume" (OuterVolumeSpecName: "config-volume") pod "a3422f33-b5ab-4658-86a0-c908efca7db9" (UID: "a3422f33-b5ab-4658-86a0-c908efca7db9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:34:21 crc kubenswrapper[4933]: I1201 09:34:21.295459 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3422f33-b5ab-4658-86a0-c908efca7db9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a3422f33-b5ab-4658-86a0-c908efca7db9" (UID: "a3422f33-b5ab-4658-86a0-c908efca7db9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:34:21 crc kubenswrapper[4933]: I1201 09:34:21.313647 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3422f33-b5ab-4658-86a0-c908efca7db9-kube-api-access-jv9w2" (OuterVolumeSpecName: "kube-api-access-jv9w2") pod "a3422f33-b5ab-4658-86a0-c908efca7db9" (UID: "a3422f33-b5ab-4658-86a0-c908efca7db9"). InnerVolumeSpecName "kube-api-access-jv9w2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:34:21 crc kubenswrapper[4933]: I1201 09:34:21.376189 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:21 crc kubenswrapper[4933]: I1201 09:34:21.377977 4933 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3422f33-b5ab-4658-86a0-c908efca7db9-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 09:34:21 crc kubenswrapper[4933]: I1201 09:34:21.378017 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv9w2\" (UniqueName: \"kubernetes.io/projected/a3422f33-b5ab-4658-86a0-c908efca7db9-kube-api-access-jv9w2\") on node \"crc\" DevicePath \"\"" Dec 01 09:34:21 crc kubenswrapper[4933]: I1201 09:34:21.378031 4933 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3422f33-b5ab-4658-86a0-c908efca7db9-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 09:34:21 crc kubenswrapper[4933]: I1201 09:34:21.382946 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-dtcjv" Dec 01 09:34:21 crc kubenswrapper[4933]: I1201 09:34:21.559735 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-jvl8l" Dec 01 09:34:21 crc kubenswrapper[4933]: I1201 09:34:21.561138 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-jvl8l" event={"ID":"a3422f33-b5ab-4658-86a0-c908efca7db9","Type":"ContainerDied","Data":"8b270b6914b313d05c1ce07f1d1af2245806be54cb7bde2c5d40f9715e90daac"} Dec 01 09:34:21 crc kubenswrapper[4933]: I1201 09:34:21.561216 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b270b6914b313d05c1ce07f1d1af2245806be54cb7bde2c5d40f9715e90daac" Dec 01 09:34:21 crc kubenswrapper[4933]: I1201 09:34:21.570290 4933 generic.go:334] "Generic (PLEG): container finished" podID="a98312ce-1864-4e25-a331-251d0b430648" containerID="357c6a3b52853aa21f00b8ed1dd25648e86944d086581a8a977b2daaaf865dde" exitCode=0 Dec 01 09:34:21 crc kubenswrapper[4933]: I1201 09:34:21.570444 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a98312ce-1864-4e25-a331-251d0b430648","Type":"ContainerDied","Data":"357c6a3b52853aa21f00b8ed1dd25648e86944d086581a8a977b2daaaf865dde"} Dec 01 09:34:21 crc kubenswrapper[4933]: I1201 09:34:21.585498 4933 generic.go:334] "Generic (PLEG): container finished" podID="9fa915f5-8371-4df3-b019-ff9b17cae77e" containerID="f8e88b3ea00aef494bda488c75f42661c263e3875fb6d34d50be56cb13bc4c97" exitCode=0 Dec 01 09:34:21 crc kubenswrapper[4933]: I1201 09:34:21.586401 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9fa915f5-8371-4df3-b019-ff9b17cae77e","Type":"ContainerDied","Data":"f8e88b3ea00aef494bda488c75f42661c263e3875fb6d34d50be56cb13bc4c97"} Dec 01 09:34:22 crc kubenswrapper[4933]: I1201 09:34:22.127916 4933 patch_prober.go:28] interesting pod/router-default-5444994796-wnvhn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 09:34:22 crc kubenswrapper[4933]: 
[-]has-synced failed: reason withheld Dec 01 09:34:22 crc kubenswrapper[4933]: [+]process-running ok Dec 01 09:34:22 crc kubenswrapper[4933]: healthz check failed Dec 01 09:34:22 crc kubenswrapper[4933]: I1201 09:34:22.128417 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wnvhn" podUID="e360f8b0-0f1b-4a9b-9aed-cd0a8976482a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 09:34:22 crc kubenswrapper[4933]: I1201 09:34:22.148940 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-lmgsj" Dec 01 09:34:22 crc kubenswrapper[4933]: I1201 09:34:22.979455 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 09:34:23 crc kubenswrapper[4933]: I1201 09:34:23.071193 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 09:34:23 crc kubenswrapper[4933]: I1201 09:34:23.124922 4933 patch_prober.go:28] interesting pod/router-default-5444994796-wnvhn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 09:34:23 crc kubenswrapper[4933]: [-]has-synced failed: reason withheld Dec 01 09:34:23 crc kubenswrapper[4933]: [+]process-running ok Dec 01 09:34:23 crc kubenswrapper[4933]: healthz check failed Dec 01 09:34:23 crc kubenswrapper[4933]: I1201 09:34:23.124999 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wnvhn" podUID="e360f8b0-0f1b-4a9b-9aed-cd0a8976482a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 09:34:23 crc kubenswrapper[4933]: I1201 09:34:23.178514 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a98312ce-1864-4e25-a331-251d0b430648-kubelet-dir\") pod \"a98312ce-1864-4e25-a331-251d0b430648\" (UID: \"a98312ce-1864-4e25-a331-251d0b430648\") " Dec 01 09:34:23 crc kubenswrapper[4933]: I1201 09:34:23.178620 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a98312ce-1864-4e25-a331-251d0b430648-kube-api-access\") pod \"a98312ce-1864-4e25-a331-251d0b430648\" (UID: \"a98312ce-1864-4e25-a331-251d0b430648\") " Dec 01 09:34:23 crc kubenswrapper[4933]: I1201 09:34:23.178788 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9fa915f5-8371-4df3-b019-ff9b17cae77e-kubelet-dir\") pod \"9fa915f5-8371-4df3-b019-ff9b17cae77e\" (UID: \"9fa915f5-8371-4df3-b019-ff9b17cae77e\") " Dec 01 09:34:23 crc kubenswrapper[4933]: I1201 09:34:23.178780 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a98312ce-1864-4e25-a331-251d0b430648-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a98312ce-1864-4e25-a331-251d0b430648" (UID: "a98312ce-1864-4e25-a331-251d0b430648"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:34:23 crc kubenswrapper[4933]: I1201 09:34:23.178877 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fa915f5-8371-4df3-b019-ff9b17cae77e-kube-api-access\") pod \"9fa915f5-8371-4df3-b019-ff9b17cae77e\" (UID: \"9fa915f5-8371-4df3-b019-ff9b17cae77e\") " Dec 01 09:34:23 crc kubenswrapper[4933]: I1201 09:34:23.179351 4933 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a98312ce-1864-4e25-a331-251d0b430648-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 01 09:34:23 crc kubenswrapper[4933]: I1201 09:34:23.179873 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fa915f5-8371-4df3-b019-ff9b17cae77e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9fa915f5-8371-4df3-b019-ff9b17cae77e" (UID: "9fa915f5-8371-4df3-b019-ff9b17cae77e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:34:23 crc kubenswrapper[4933]: I1201 09:34:23.207905 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa915f5-8371-4df3-b019-ff9b17cae77e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9fa915f5-8371-4df3-b019-ff9b17cae77e" (UID: "9fa915f5-8371-4df3-b019-ff9b17cae77e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:34:23 crc kubenswrapper[4933]: I1201 09:34:23.209135 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a98312ce-1864-4e25-a331-251d0b430648-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a98312ce-1864-4e25-a331-251d0b430648" (UID: "a98312ce-1864-4e25-a331-251d0b430648"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:34:23 crc kubenswrapper[4933]: I1201 09:34:23.281239 4933 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9fa915f5-8371-4df3-b019-ff9b17cae77e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 01 09:34:23 crc kubenswrapper[4933]: I1201 09:34:23.281283 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fa915f5-8371-4df3-b019-ff9b17cae77e-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 09:34:23 crc kubenswrapper[4933]: I1201 09:34:23.281319 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a98312ce-1864-4e25-a331-251d0b430648-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 09:34:23 crc kubenswrapper[4933]: I1201 09:34:23.629705 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a98312ce-1864-4e25-a331-251d0b430648","Type":"ContainerDied","Data":"8ec36bfd892dee8485e3ccfec539321ccddc2023eabc864f71de4c6a02c23919"} Dec 01 09:34:23 crc kubenswrapper[4933]: I1201 09:34:23.630197 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ec36bfd892dee8485e3ccfec539321ccddc2023eabc864f71de4c6a02c23919" Dec 01 09:34:23 crc kubenswrapper[4933]: I1201 09:34:23.630195 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 09:34:23 crc kubenswrapper[4933]: I1201 09:34:23.632418 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9fa915f5-8371-4df3-b019-ff9b17cae77e","Type":"ContainerDied","Data":"1a8e4183c23a9896b6fd310804e9e4dc242a8cc7f6b69a533dfa0ab949ebd524"} Dec 01 09:34:23 crc kubenswrapper[4933]: I1201 09:34:23.632461 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a8e4183c23a9896b6fd310804e9e4dc242a8cc7f6b69a533dfa0ab949ebd524" Dec 01 09:34:23 crc kubenswrapper[4933]: I1201 09:34:23.632542 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 09:34:24 crc kubenswrapper[4933]: I1201 09:34:24.125820 4933 patch_prober.go:28] interesting pod/router-default-5444994796-wnvhn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 09:34:24 crc kubenswrapper[4933]: [-]has-synced failed: reason withheld Dec 01 09:34:24 crc kubenswrapper[4933]: [+]process-running ok Dec 01 09:34:24 crc kubenswrapper[4933]: healthz check failed Dec 01 09:34:24 crc kubenswrapper[4933]: I1201 09:34:24.125881 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wnvhn" podUID="e360f8b0-0f1b-4a9b-9aed-cd0a8976482a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 09:34:25 crc kubenswrapper[4933]: I1201 09:34:25.125354 4933 patch_prober.go:28] interesting pod/router-default-5444994796-wnvhn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 09:34:25 crc kubenswrapper[4933]: [-]has-synced failed: reason withheld Dec 01 09:34:25 crc kubenswrapper[4933]: [+]process-running ok Dec 01 09:34:25 crc kubenswrapper[4933]: healthz check failed Dec 01 09:34:25 crc kubenswrapper[4933]: I1201 09:34:25.125413 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wnvhn" podUID="e360f8b0-0f1b-4a9b-9aed-cd0a8976482a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 09:34:26 crc kubenswrapper[4933]: I1201 09:34:26.125145 4933 patch_prober.go:28] interesting pod/router-default-5444994796-wnvhn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 09:34:26 crc kubenswrapper[4933]: [-]has-synced failed: reason withheld Dec 01 09:34:26 crc kubenswrapper[4933]: [+]process-running ok Dec 01 09:34:26 crc kubenswrapper[4933]: healthz check failed Dec 01 09:34:26 crc kubenswrapper[4933]: I1201 09:34:26.125214 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wnvhn" podUID="e360f8b0-0f1b-4a9b-9aed-cd0a8976482a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 09:34:26 crc kubenswrapper[4933]: I1201 09:34:26.148809 4933 patch_prober.go:28] interesting pod/console-f9d7485db-x74qn container/console namespace/openshift-console: Startup probe status=failure output="Get 
\"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Dec 01 09:34:26 crc kubenswrapper[4933]: I1201 09:34:26.148911 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-x74qn" podUID="45bbe65f-8e73-4b73-863c-15db667e3e22" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Dec 01 09:34:26 crc kubenswrapper[4933]: I1201 09:34:26.814400 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-v4fq8" Dec 01 09:34:27 crc kubenswrapper[4933]: I1201 09:34:27.125684 4933 patch_prober.go:28] interesting pod/router-default-5444994796-wnvhn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 09:34:27 crc kubenswrapper[4933]: [-]has-synced failed: reason withheld Dec 01 09:34:27 crc kubenswrapper[4933]: [+]process-running ok Dec 01 09:34:27 crc kubenswrapper[4933]: healthz check failed Dec 01 09:34:27 crc kubenswrapper[4933]: I1201 09:34:27.125751 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wnvhn" podUID="e360f8b0-0f1b-4a9b-9aed-cd0a8976482a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 09:34:28 crc kubenswrapper[4933]: I1201 09:34:28.126053 4933 patch_prober.go:28] interesting pod/router-default-5444994796-wnvhn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 09:34:28 crc kubenswrapper[4933]: [-]has-synced failed: reason withheld Dec 01 09:34:28 crc kubenswrapper[4933]: [+]process-running ok Dec 01 09:34:28 crc kubenswrapper[4933]: healthz check failed Dec 01 09:34:28 crc kubenswrapper[4933]: I1201 09:34:28.126170 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wnvhn" podUID="e360f8b0-0f1b-4a9b-9aed-cd0a8976482a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 09:34:29 crc kubenswrapper[4933]: I1201 09:34:29.132441 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-wnvhn" Dec 01 09:34:29 crc kubenswrapper[4933]: I1201 09:34:29.135870 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-wnvhn" Dec 01 09:34:30 crc kubenswrapper[4933]: I1201 09:34:30.449683 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs\") pod \"network-metrics-daemon-bcqz5\" (UID: \"9e67470a-b3fe-4176-b546-fdf28012fce5\") " pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:34:30 crc kubenswrapper[4933]: I1201 09:34:30.458260 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9e67470a-b3fe-4176-b546-fdf28012fce5-metrics-certs\") pod \"network-metrics-daemon-bcqz5\" (UID: \"9e67470a-b3fe-4176-b546-fdf28012fce5\") " pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:34:30 crc kubenswrapper[4933]: I1201 09:34:30.685360 4933 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bcqz5" Dec 01 09:34:35 crc kubenswrapper[4933]: I1201 09:34:35.675751 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" Dec 01 09:34:36 crc kubenswrapper[4933]: I1201 09:34:36.233377 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-x74qn" Dec 01 09:34:36 crc kubenswrapper[4933]: I1201 09:34:36.237741 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-x74qn" Dec 01 09:34:41 crc kubenswrapper[4933]: I1201 09:34:41.741117 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:34:41 crc kubenswrapper[4933]: I1201 09:34:41.742091 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:34:47 crc kubenswrapper[4933]: I1201 09:34:47.127661 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x82fl" Dec 01 09:34:49 crc kubenswrapper[4933]: I1201 09:34:49.141729 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bcqz5"] Dec 01 09:34:51 crc kubenswrapper[4933]: E1201 09:34:51.962836 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 01 09:34:51 crc kubenswrapper[4933]: E1201 09:34:51.963883 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b6jsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-jlnld_openshift-marketplace(5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 09:34:51 crc kubenswrapper[4933]: E1201 09:34:51.965131 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-jlnld" podUID="5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b" Dec 01 09:34:52 crc kubenswrapper[4933]: I1201 09:34:52.644885 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 01 09:34:52 crc kubenswrapper[4933]: E1201 09:34:52.645532 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a98312ce-1864-4e25-a331-251d0b430648" containerName="pruner" Dec 01 09:34:52 crc kubenswrapper[4933]: I1201 09:34:52.645543 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="a98312ce-1864-4e25-a331-251d0b430648" containerName="pruner" Dec 01 09:34:52 crc kubenswrapper[4933]: E1201 09:34:52.645556 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa915f5-8371-4df3-b019-ff9b17cae77e" containerName="pruner" Dec 01 09:34:52 crc kubenswrapper[4933]: I1201 09:34:52.645562 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa915f5-8371-4df3-b019-ff9b17cae77e" containerName="pruner" Dec 01 09:34:52 crc kubenswrapper[4933]: E1201 09:34:52.645573 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3422f33-b5ab-4658-86a0-c908efca7db9" containerName="collect-profiles" Dec 01 09:34:52 crc kubenswrapper[4933]: I1201 09:34:52.645581 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3422f33-b5ab-4658-86a0-c908efca7db9" containerName="collect-profiles" Dec 01 09:34:52 crc kubenswrapper[4933]: I1201 09:34:52.645692 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3422f33-b5ab-4658-86a0-c908efca7db9" containerName="collect-profiles" Dec 01 09:34:52 crc kubenswrapper[4933]: I1201 
09:34:52.645707 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="a98312ce-1864-4e25-a331-251d0b430648" containerName="pruner" Dec 01 09:34:52 crc kubenswrapper[4933]: I1201 09:34:52.645717 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa915f5-8371-4df3-b019-ff9b17cae77e" containerName="pruner" Dec 01 09:34:52 crc kubenswrapper[4933]: I1201 09:34:52.646153 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 09:34:52 crc kubenswrapper[4933]: I1201 09:34:52.653322 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 01 09:34:52 crc kubenswrapper[4933]: I1201 09:34:52.653901 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 01 09:34:52 crc kubenswrapper[4933]: I1201 09:34:52.659049 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 01 09:34:52 crc kubenswrapper[4933]: I1201 09:34:52.780716 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c2108a8-ede0-4da7-b73d-82838aac033b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7c2108a8-ede0-4da7-b73d-82838aac033b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 09:34:52 crc kubenswrapper[4933]: I1201 09:34:52.780793 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c2108a8-ede0-4da7-b73d-82838aac033b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7c2108a8-ede0-4da7-b73d-82838aac033b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 09:34:52 crc kubenswrapper[4933]: I1201 09:34:52.882497 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c2108a8-ede0-4da7-b73d-82838aac033b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7c2108a8-ede0-4da7-b73d-82838aac033b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 09:34:52 crc kubenswrapper[4933]: I1201 09:34:52.882602 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c2108a8-ede0-4da7-b73d-82838aac033b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7c2108a8-ede0-4da7-b73d-82838aac033b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 09:34:52 crc kubenswrapper[4933]: I1201 09:34:52.882966 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c2108a8-ede0-4da7-b73d-82838aac033b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7c2108a8-ede0-4da7-b73d-82838aac033b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 09:34:52 crc kubenswrapper[4933]: I1201 09:34:52.902565 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c2108a8-ede0-4da7-b73d-82838aac033b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7c2108a8-ede0-4da7-b73d-82838aac033b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 09:34:52 crc kubenswrapper[4933]: I1201 09:34:52.982630 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 09:34:53 crc kubenswrapper[4933]: E1201 09:34:53.082191 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 01 09:34:53 crc kubenswrapper[4933]: E1201 09:34:53.082399 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdttm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-ngb5q_openshift-marketplace(a08a8024-ebc2-4e05-a6a0-ebc22bed8658): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 09:34:53 crc kubenswrapper[4933]: E1201 09:34:53.083711 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-ngb5q" podUID="a08a8024-ebc2-4e05-a6a0-ebc22bed8658" Dec 01 09:34:53 crc kubenswrapper[4933]: E1201 09:34:53.115296 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 01 09:34:53 crc kubenswrapper[4933]: E1201 09:34:53.115488 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mhrfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-5tk2j_openshift-marketplace(194f9dd3-85db-4303-ad0e-180d0e160da0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 09:34:53 crc kubenswrapper[4933]: E1201 09:34:53.116657 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-5tk2j" podUID="194f9dd3-85db-4303-ad0e-180d0e160da0" Dec 01 09:34:56 crc kubenswrapper[4933]: I1201 09:34:56.434323 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4wqht"] Dec 01 09:34:57 crc kubenswrapper[4933]: E1201 09:34:57.054336 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-5tk2j" podUID="194f9dd3-85db-4303-ad0e-180d0e160da0" Dec 01 09:34:57 crc kubenswrapper[4933]: E1201 09:34:57.054453 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jlnld" podUID="5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b" Dec 01 09:34:57 crc kubenswrapper[4933]: E1201 09:34:57.054681 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ngb5q" podUID="a08a8024-ebc2-4e05-a6a0-ebc22bed8658" Dec 01 09:34:57 crc kubenswrapper[4933]: W1201 09:34:57.056826 4933 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e67470a_b3fe_4176_b546_fdf28012fce5.slice/crio-f1fa93dd6515d10089bc54654b3b13a2f9035047091dcf75bf6fafd397d1365b WatchSource:0}: Error finding container f1fa93dd6515d10089bc54654b3b13a2f9035047091dcf75bf6fafd397d1365b: Status 404 returned error can't find the container with id f1fa93dd6515d10089bc54654b3b13a2f9035047091dcf75bf6fafd397d1365b Dec 01 09:34:57 crc kubenswrapper[4933]: E1201 09:34:57.133962 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 01 09:34:57 crc kubenswrapper[4933]: E1201 09:34:57.134149 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sjx5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-b8gph_openshift-marketplace(c1f2e651-74da-4f9c-9294-c2d45830b676): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 09:34:57 crc kubenswrapper[4933]: E1201 09:34:57.135642 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-b8gph" podUID="c1f2e651-74da-4f9c-9294-c2d45830b676" Dec 01 09:34:57 crc kubenswrapper[4933]: E1201 09:34:57.149921 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 01 09:34:57 crc kubenswrapper[4933]: E1201 09:34:57.150115 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wtlv2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-xpldw_openshift-marketplace(1527ab8a-0674-4959-872e-bb759c7657e1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 09:34:57 crc kubenswrapper[4933]: E1201 09:34:57.151516 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-xpldw" podUID="1527ab8a-0674-4959-872e-bb759c7657e1" Dec 01 09:34:57 crc kubenswrapper[4933]: I1201 09:34:57.240300 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 01 09:34:57 crc kubenswrapper[4933]: I1201 09:34:57.241262 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 01 09:34:57 crc kubenswrapper[4933]: E1201 09:34:57.244614 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 01 09:34:57 crc kubenswrapper[4933]: E1201 09:34:57.244822 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kt2mq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pjpdf_openshift-marketplace(447e5be4-e974-45c7-a58a-2efddd4bd49c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 09:34:57 crc kubenswrapper[4933]: E1201 09:34:57.247171 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-pjpdf" podUID="447e5be4-e974-45c7-a58a-2efddd4bd49c" Dec 01 09:34:57 crc kubenswrapper[4933]: I1201 09:34:57.249531 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 01 09:34:57 crc kubenswrapper[4933]: I1201 09:34:57.353483 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89266d5c-bcba-413a-9ab0-0901d25cd528-kube-api-access\") pod \"installer-9-crc\" (UID: \"89266d5c-bcba-413a-9ab0-0901d25cd528\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 09:34:57 crc kubenswrapper[4933]: I1201 09:34:57.353949 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89266d5c-bcba-413a-9ab0-0901d25cd528-var-lock\") pod \"installer-9-crc\" (UID: 
\"89266d5c-bcba-413a-9ab0-0901d25cd528\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 09:34:57 crc kubenswrapper[4933]: I1201 09:34:57.354107 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89266d5c-bcba-413a-9ab0-0901d25cd528-kubelet-dir\") pod \"installer-9-crc\" (UID: \"89266d5c-bcba-413a-9ab0-0901d25cd528\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 09:34:57 crc kubenswrapper[4933]: I1201 09:34:57.455649 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89266d5c-bcba-413a-9ab0-0901d25cd528-kubelet-dir\") pod \"installer-9-crc\" (UID: \"89266d5c-bcba-413a-9ab0-0901d25cd528\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 09:34:57 crc kubenswrapper[4933]: I1201 09:34:57.455733 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89266d5c-bcba-413a-9ab0-0901d25cd528-kube-api-access\") pod \"installer-9-crc\" (UID: \"89266d5c-bcba-413a-9ab0-0901d25cd528\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 09:34:57 crc kubenswrapper[4933]: I1201 09:34:57.455758 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89266d5c-bcba-413a-9ab0-0901d25cd528-var-lock\") pod \"installer-9-crc\" (UID: \"89266d5c-bcba-413a-9ab0-0901d25cd528\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 09:34:57 crc kubenswrapper[4933]: I1201 09:34:57.455850 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89266d5c-bcba-413a-9ab0-0901d25cd528-kubelet-dir\") pod \"installer-9-crc\" (UID: \"89266d5c-bcba-413a-9ab0-0901d25cd528\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 09:34:57 crc kubenswrapper[4933]: I1201 09:34:57.455907 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89266d5c-bcba-413a-9ab0-0901d25cd528-var-lock\") pod \"installer-9-crc\" (UID: \"89266d5c-bcba-413a-9ab0-0901d25cd528\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 09:34:57 crc kubenswrapper[4933]: I1201 09:34:57.479870 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89266d5c-bcba-413a-9ab0-0901d25cd528-kube-api-access\") pod \"installer-9-crc\" (UID: \"89266d5c-bcba-413a-9ab0-0901d25cd528\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 09:34:57 crc kubenswrapper[4933]: I1201 09:34:57.604368 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 09:34:57 crc kubenswrapper[4933]: I1201 09:34:57.632172 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 01 09:34:57 crc kubenswrapper[4933]: I1201 09:34:57.707049 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 01 09:34:57 crc kubenswrapper[4933]: I1201 09:34:57.962568 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bcqz5" event={"ID":"9e67470a-b3fe-4176-b546-fdf28012fce5","Type":"ContainerStarted","Data":"aa6c598d5a9336fa3da7e031d14510062fa6dc61ffc9248507f6f2bcb7cfcd16"} Dec 01 09:34:57 crc kubenswrapper[4933]: I1201 09:34:57.962934 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bcqz5" event={"ID":"9e67470a-b3fe-4176-b546-fdf28012fce5","Type":"ContainerStarted","Data":"616008c3a695ee8b3d5e1a0836e57c67a6f6cb2c2bed5dd6c27f754fbc7766a0"} Dec 01 09:34:57 crc kubenswrapper[4933]: I1201 09:34:57.962947 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bcqz5" event={"ID":"9e67470a-b3fe-4176-b546-fdf28012fce5","Type":"ContainerStarted","Data":"f1fa93dd6515d10089bc54654b3b13a2f9035047091dcf75bf6fafd397d1365b"} Dec 01 09:34:57 crc kubenswrapper[4933]: I1201 09:34:57.967055 4933 generic.go:334] "Generic (PLEG): container finished" podID="4bb6d65b-3b0f-4d8f-83be-242ed7b0807c" containerID="4c90c3d2013eed7ac2e5fdc17e6dc23f221acf3e48873c43d72d266c44057ffc" exitCode=0 Dec 01 09:34:57 crc kubenswrapper[4933]: I1201 09:34:57.967125 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4cjr" event={"ID":"4bb6d65b-3b0f-4d8f-83be-242ed7b0807c","Type":"ContainerDied","Data":"4c90c3d2013eed7ac2e5fdc17e6dc23f221acf3e48873c43d72d266c44057ffc"} Dec 01 09:34:57 crc kubenswrapper[4933]: I1201 09:34:57.977811 4933 generic.go:334] "Generic (PLEG): container finished" podID="e57325d2-83bc-484f-a95a-548b55435acd" containerID="12f4fd5e829ef86eeb557966786c2ee8ad13cf21eec23f507f581595c4010367" exitCode=0 Dec 01 09:34:57 crc kubenswrapper[4933]: I1201 09:34:57.977885 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4gt9q" event={"ID":"e57325d2-83bc-484f-a95a-548b55435acd","Type":"ContainerDied","Data":"12f4fd5e829ef86eeb557966786c2ee8ad13cf21eec23f507f581595c4010367"} Dec 01 09:34:57 crc kubenswrapper[4933]: I1201 09:34:57.981993 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7c2108a8-ede0-4da7-b73d-82838aac033b","Type":"ContainerStarted","Data":"f594e30c7b7976504bb89941b0f658b8d8c7e9fbfb160962b4f7089bef1f6497"} Dec 01 09:34:57 crc kubenswrapper[4933]: I1201 09:34:57.987402 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-bcqz5" podStartSLOduration=169.987378597 podStartE2EDuration="2m49.987378597s" podCreationTimestamp="2025-12-01 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:57.983226124 +0000 UTC m=+188.624949739" watchObservedRunningTime="2025-12-01 09:34:57.987378597 +0000 UTC m=+188.629102212" Dec 01 09:34:57 crc kubenswrapper[4933]: E1201 09:34:57.991178 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pjpdf" podUID="447e5be4-e974-45c7-a58a-2efddd4bd49c" Dec 01 09:34:57 crc kubenswrapper[4933]: E1201 09:34:57.991184 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-b8gph" podUID="c1f2e651-74da-4f9c-9294-c2d45830b676" Dec 01 09:34:57 crc kubenswrapper[4933]: E1201 09:34:57.991211 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-xpldw" podUID="1527ab8a-0674-4959-872e-bb759c7657e1" Dec 01 09:34:58 crc kubenswrapper[4933]: I1201 09:34:58.091112 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 01 09:34:58 crc kubenswrapper[4933]: I1201 09:34:58.989841 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4cjr" event={"ID":"4bb6d65b-3b0f-4d8f-83be-242ed7b0807c","Type":"ContainerStarted","Data":"8858330236cd0618fcfb1323bd89a970022568c1517cb370328373302822e5ff"} Dec 01 09:34:58 crc kubenswrapper[4933]: I1201 09:34:58.991846 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4gt9q" event={"ID":"e57325d2-83bc-484f-a95a-548b55435acd","Type":"ContainerStarted","Data":"18a73b6f65c657b5ffb0b3e120b8fff8b0aef37d85a7abc5c84c37f5c3888724"} Dec 01 09:34:58 crc kubenswrapper[4933]: I1201 09:34:58.993667 4933 generic.go:334] "Generic (PLEG): container finished" podID="7c2108a8-ede0-4da7-b73d-82838aac033b" containerID="2adef9873f97aee0bfcece01589143747538ee5fe7f78a58881f009af9029f56" exitCode=0 Dec 01 09:34:58 crc kubenswrapper[4933]: I1201 09:34:58.993734 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7c2108a8-ede0-4da7-b73d-82838aac033b","Type":"ContainerDied","Data":"2adef9873f97aee0bfcece01589143747538ee5fe7f78a58881f009af9029f56"} Dec 01 09:34:58 crc kubenswrapper[4933]: I1201 09:34:58.995133 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"89266d5c-bcba-413a-9ab0-0901d25cd528","Type":"ContainerStarted","Data":"fb3560c3c6dfb3d649eec5c0409e20dedce6d9ca9c94e6e4e24fec340e96fe76"} Dec 01 09:34:58 crc kubenswrapper[4933]: I1201 09:34:58.995158 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"89266d5c-bcba-413a-9ab0-0901d25cd528","Type":"ContainerStarted","Data":"f7747b2a40c71bc17968b2cd641df1bfcd8c41b52ab50fd6fe342fed405a4b01"} Dec 01 09:34:59 crc kubenswrapper[4933]: I1201 09:34:59.044250 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j4cjr" podStartSLOduration=4.944648894 podStartE2EDuration="44.044228392s" podCreationTimestamp="2025-12-01 09:34:15 +0000 UTC" firstStartedPulling="2025-12-01 09:34:19.369169567 +0000 UTC m=+150.010893182" lastFinishedPulling="2025-12-01 09:34:58.468749065 +0000 UTC m=+189.110472680" observedRunningTime="2025-12-01 09:34:59.015434319 +0000 UTC m=+189.657157954" watchObservedRunningTime="2025-12-01 
09:34:59.044228392 +0000 UTC m=+189.685952027" Dec 01 09:34:59 crc kubenswrapper[4933]: I1201 09:34:59.044815 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4gt9q" podStartSLOduration=3.6915952560000003 podStartE2EDuration="46.044808596s" podCreationTimestamp="2025-12-01 09:34:13 +0000 UTC" firstStartedPulling="2025-12-01 09:34:16.23653683 +0000 UTC m=+146.878260445" lastFinishedPulling="2025-12-01 09:34:58.58975017 +0000 UTC m=+189.231473785" observedRunningTime="2025-12-01 09:34:59.040803167 +0000 UTC m=+189.682526802" watchObservedRunningTime="2025-12-01 09:34:59.044808596 +0000 UTC m=+189.686532221" Dec 01 09:34:59 crc kubenswrapper[4933]: I1201 09:34:59.078832 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.078812518 podStartE2EDuration="2.078812518s" podCreationTimestamp="2025-12-01 09:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:34:59.078392138 +0000 UTC m=+189.720115773" watchObservedRunningTime="2025-12-01 09:34:59.078812518 +0000 UTC m=+189.720536133" Dec 01 09:35:00 crc kubenswrapper[4933]: I1201 09:35:00.332681 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 09:35:00 crc kubenswrapper[4933]: I1201 09:35:00.500294 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c2108a8-ede0-4da7-b73d-82838aac033b-kubelet-dir\") pod \"7c2108a8-ede0-4da7-b73d-82838aac033b\" (UID: \"7c2108a8-ede0-4da7-b73d-82838aac033b\") " Dec 01 09:35:00 crc kubenswrapper[4933]: I1201 09:35:00.500359 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c2108a8-ede0-4da7-b73d-82838aac033b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7c2108a8-ede0-4da7-b73d-82838aac033b" (UID: "7c2108a8-ede0-4da7-b73d-82838aac033b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:35:00 crc kubenswrapper[4933]: I1201 09:35:00.500421 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c2108a8-ede0-4da7-b73d-82838aac033b-kube-api-access\") pod \"7c2108a8-ede0-4da7-b73d-82838aac033b\" (UID: \"7c2108a8-ede0-4da7-b73d-82838aac033b\") " Dec 01 09:35:00 crc kubenswrapper[4933]: I1201 09:35:00.500709 4933 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c2108a8-ede0-4da7-b73d-82838aac033b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:00 crc kubenswrapper[4933]: I1201 09:35:00.509502 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c2108a8-ede0-4da7-b73d-82838aac033b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7c2108a8-ede0-4da7-b73d-82838aac033b" (UID: "7c2108a8-ede0-4da7-b73d-82838aac033b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:35:00 crc kubenswrapper[4933]: I1201 09:35:00.602171 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c2108a8-ede0-4da7-b73d-82838aac033b-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:01 crc kubenswrapper[4933]: I1201 09:35:01.010537 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7c2108a8-ede0-4da7-b73d-82838aac033b","Type":"ContainerDied","Data":"f594e30c7b7976504bb89941b0f658b8d8c7e9fbfb160962b4f7089bef1f6497"} Dec 01 09:35:01 crc kubenswrapper[4933]: I1201 09:35:01.010592 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f594e30c7b7976504bb89941b0f658b8d8c7e9fbfb160962b4f7089bef1f6497" Dec 01 09:35:01 crc kubenswrapper[4933]: I1201 09:35:01.010597 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 09:35:04 crc kubenswrapper[4933]: I1201 09:35:04.281514 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4gt9q" Dec 01 09:35:04 crc kubenswrapper[4933]: I1201 09:35:04.282175 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4gt9q" Dec 01 09:35:04 crc kubenswrapper[4933]: I1201 09:35:04.853071 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4gt9q" Dec 01 09:35:05 crc kubenswrapper[4933]: I1201 09:35:05.122446 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4gt9q" Dec 01 09:35:05 crc kubenswrapper[4933]: I1201 09:35:05.217848 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4gt9q"] Dec 01 09:35:06 crc kubenswrapper[4933]: I1201 09:35:06.274888 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j4cjr" Dec 01 09:35:06 crc kubenswrapper[4933]: I1201 09:35:06.274956 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j4cjr" Dec 01 09:35:06 crc kubenswrapper[4933]: I1201 09:35:06.324589 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j4cjr" Dec 01 09:35:07 crc kubenswrapper[4933]: I1201 09:35:07.062457 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4gt9q" podUID="e57325d2-83bc-484f-a95a-548b55435acd" containerName="registry-server" containerID="cri-o://18a73b6f65c657b5ffb0b3e120b8fff8b0aef37d85a7abc5c84c37f5c3888724" gracePeriod=2 Dec 01 09:35:07 crc kubenswrapper[4933]: I1201 09:35:07.110782 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j4cjr" Dec 01 09:35:07 crc kubenswrapper[4933]: I1201 09:35:07.617927 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4cjr"] Dec 01 09:35:07 crc kubenswrapper[4933]: I1201 09:35:07.945425 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4gt9q" Dec 01 09:35:08 crc kubenswrapper[4933]: I1201 09:35:08.069727 4933 generic.go:334] "Generic (PLEG): container finished" podID="e57325d2-83bc-484f-a95a-548b55435acd" containerID="18a73b6f65c657b5ffb0b3e120b8fff8b0aef37d85a7abc5c84c37f5c3888724" exitCode=0 Dec 01 09:35:08 crc kubenswrapper[4933]: I1201 09:35:08.069810 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4gt9q" event={"ID":"e57325d2-83bc-484f-a95a-548b55435acd","Type":"ContainerDied","Data":"18a73b6f65c657b5ffb0b3e120b8fff8b0aef37d85a7abc5c84c37f5c3888724"} Dec 01 09:35:08 crc kubenswrapper[4933]: I1201 09:35:08.069918 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4gt9q" event={"ID":"e57325d2-83bc-484f-a95a-548b55435acd","Type":"ContainerDied","Data":"50cf9783da257c004354ac1cab01a5d780472e07804748dea47c1ed3b2a76aad"} Dec 01 09:35:08 crc kubenswrapper[4933]: I1201 09:35:08.069952 4933 scope.go:117] "RemoveContainer" containerID="18a73b6f65c657b5ffb0b3e120b8fff8b0aef37d85a7abc5c84c37f5c3888724" Dec 01 09:35:08 crc kubenswrapper[4933]: I1201 09:35:08.070425 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4gt9q" Dec 01 09:35:08 crc kubenswrapper[4933]: I1201 09:35:08.088427 4933 scope.go:117] "RemoveContainer" containerID="12f4fd5e829ef86eeb557966786c2ee8ad13cf21eec23f507f581595c4010367" Dec 01 09:35:08 crc kubenswrapper[4933]: I1201 09:35:08.109069 4933 scope.go:117] "RemoveContainer" containerID="ad0fc573890e5d239c1fdb34221ecd4bb6afc1baf239f946a96f7c04e67574b1" Dec 01 09:35:08 crc kubenswrapper[4933]: I1201 09:35:08.110939 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e57325d2-83bc-484f-a95a-548b55435acd-catalog-content\") pod \"e57325d2-83bc-484f-a95a-548b55435acd\" (UID: \"e57325d2-83bc-484f-a95a-548b55435acd\") " Dec 01 09:35:08 crc kubenswrapper[4933]: I1201 09:35:08.111050 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e57325d2-83bc-484f-a95a-548b55435acd-utilities\") pod \"e57325d2-83bc-484f-a95a-548b55435acd\" (UID: \"e57325d2-83bc-484f-a95a-548b55435acd\") " Dec 01 09:35:08 crc kubenswrapper[4933]: I1201 09:35:08.111083 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h94p7\" (UniqueName: \"kubernetes.io/projected/e57325d2-83bc-484f-a95a-548b55435acd-kube-api-access-h94p7\") pod \"e57325d2-83bc-484f-a95a-548b55435acd\" (UID: \"e57325d2-83bc-484f-a95a-548b55435acd\") " Dec 01 09:35:08 crc kubenswrapper[4933]: I1201 09:35:08.112146 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e57325d2-83bc-484f-a95a-548b55435acd-utilities" (OuterVolumeSpecName: "utilities") pod "e57325d2-83bc-484f-a95a-548b55435acd" (UID: "e57325d2-83bc-484f-a95a-548b55435acd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:35:08 crc kubenswrapper[4933]: I1201 09:35:08.118738 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e57325d2-83bc-484f-a95a-548b55435acd-kube-api-access-h94p7" (OuterVolumeSpecName: "kube-api-access-h94p7") pod "e57325d2-83bc-484f-a95a-548b55435acd" (UID: "e57325d2-83bc-484f-a95a-548b55435acd"). InnerVolumeSpecName "kube-api-access-h94p7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:35:08 crc kubenswrapper[4933]: I1201 09:35:08.151024 4933 scope.go:117] "RemoveContainer" containerID="18a73b6f65c657b5ffb0b3e120b8fff8b0aef37d85a7abc5c84c37f5c3888724" Dec 01 09:35:08 crc kubenswrapper[4933]: E1201 09:35:08.151925 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18a73b6f65c657b5ffb0b3e120b8fff8b0aef37d85a7abc5c84c37f5c3888724\": container with ID starting with 18a73b6f65c657b5ffb0b3e120b8fff8b0aef37d85a7abc5c84c37f5c3888724 not found: ID does not exist" containerID="18a73b6f65c657b5ffb0b3e120b8fff8b0aef37d85a7abc5c84c37f5c3888724" Dec 01 09:35:08 crc kubenswrapper[4933]: I1201 09:35:08.151993 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18a73b6f65c657b5ffb0b3e120b8fff8b0aef37d85a7abc5c84c37f5c3888724"} err="failed to get container status \"18a73b6f65c657b5ffb0b3e120b8fff8b0aef37d85a7abc5c84c37f5c3888724\": rpc error: code = NotFound desc = could not find container \"18a73b6f65c657b5ffb0b3e120b8fff8b0aef37d85a7abc5c84c37f5c3888724\": container with ID starting with 18a73b6f65c657b5ffb0b3e120b8fff8b0aef37d85a7abc5c84c37f5c3888724 not found: ID does not exist" Dec 01 09:35:08 crc kubenswrapper[4933]: I1201 09:35:08.152071 4933 scope.go:117] "RemoveContainer" containerID="12f4fd5e829ef86eeb557966786c2ee8ad13cf21eec23f507f581595c4010367" Dec 01 09:35:08 crc kubenswrapper[4933]: E1201 09:35:08.152732 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12f4fd5e829ef86eeb557966786c2ee8ad13cf21eec23f507f581595c4010367\": container with ID starting with 12f4fd5e829ef86eeb557966786c2ee8ad13cf21eec23f507f581595c4010367 not found: ID does not exist" containerID="12f4fd5e829ef86eeb557966786c2ee8ad13cf21eec23f507f581595c4010367" Dec 01 09:35:08 crc kubenswrapper[4933]: I1201 09:35:08.152806 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12f4fd5e829ef86eeb557966786c2ee8ad13cf21eec23f507f581595c4010367"} err="failed to get container status \"12f4fd5e829ef86eeb557966786c2ee8ad13cf21eec23f507f581595c4010367\": rpc error: code = NotFound desc = could not find container \"12f4fd5e829ef86eeb557966786c2ee8ad13cf21eec23f507f581595c4010367\": container with ID starting with 12f4fd5e829ef86eeb557966786c2ee8ad13cf21eec23f507f581595c4010367 not found: ID does not exist" Dec 01 09:35:08 crc kubenswrapper[4933]: I1201 09:35:08.152862 4933 scope.go:117] "RemoveContainer" containerID="ad0fc573890e5d239c1fdb34221ecd4bb6afc1baf239f946a96f7c04e67574b1" Dec 01 09:35:08 crc kubenswrapper[4933]: E1201 09:35:08.153375 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad0fc573890e5d239c1fdb34221ecd4bb6afc1baf239f946a96f7c04e67574b1\": container with ID starting with ad0fc573890e5d239c1fdb34221ecd4bb6afc1baf239f946a96f7c04e67574b1 not found: ID does not 
exist" containerID="ad0fc573890e5d239c1fdb34221ecd4bb6afc1baf239f946a96f7c04e67574b1" Dec 01 09:35:08 crc kubenswrapper[4933]: I1201 09:35:08.153495 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad0fc573890e5d239c1fdb34221ecd4bb6afc1baf239f946a96f7c04e67574b1"} err="failed to get container status \"ad0fc573890e5d239c1fdb34221ecd4bb6afc1baf239f946a96f7c04e67574b1\": rpc error: code = NotFound desc = could not find container \"ad0fc573890e5d239c1fdb34221ecd4bb6afc1baf239f946a96f7c04e67574b1\": container with ID starting with ad0fc573890e5d239c1fdb34221ecd4bb6afc1baf239f946a96f7c04e67574b1 not found: ID does not exist" Dec 01 09:35:08 crc kubenswrapper[4933]: I1201 09:35:08.212992 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e57325d2-83bc-484f-a95a-548b55435acd-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:08 crc kubenswrapper[4933]: I1201 09:35:08.213035 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h94p7\" (UniqueName: \"kubernetes.io/projected/e57325d2-83bc-484f-a95a-548b55435acd-kube-api-access-h94p7\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:08 crc kubenswrapper[4933]: I1201 09:35:08.398883 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e57325d2-83bc-484f-a95a-548b55435acd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e57325d2-83bc-484f-a95a-548b55435acd" (UID: "e57325d2-83bc-484f-a95a-548b55435acd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:35:08 crc kubenswrapper[4933]: I1201 09:35:08.415159 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e57325d2-83bc-484f-a95a-548b55435acd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:08 crc kubenswrapper[4933]: I1201 09:35:08.708687 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4gt9q"] Dec 01 09:35:08 crc kubenswrapper[4933]: I1201 09:35:08.717097 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4gt9q"] Dec 01 09:35:09 crc kubenswrapper[4933]: I1201 09:35:09.076532 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j4cjr" podUID="4bb6d65b-3b0f-4d8f-83be-242ed7b0807c" containerName="registry-server" containerID="cri-o://8858330236cd0618fcfb1323bd89a970022568c1517cb370328373302822e5ff" gracePeriod=2 Dec 01 09:35:09 crc kubenswrapper[4933]: I1201 09:35:09.678396 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e57325d2-83bc-484f-a95a-548b55435acd" path="/var/lib/kubelet/pods/e57325d2-83bc-484f-a95a-548b55435acd/volumes" Dec 01 09:35:09 crc kubenswrapper[4933]: I1201 09:35:09.965617 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4cjr" Dec 01 09:35:10 crc kubenswrapper[4933]: I1201 09:35:10.038008 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb6d65b-3b0f-4d8f-83be-242ed7b0807c-catalog-content\") pod \"4bb6d65b-3b0f-4d8f-83be-242ed7b0807c\" (UID: \"4bb6d65b-3b0f-4d8f-83be-242ed7b0807c\") " Dec 01 09:35:10 crc kubenswrapper[4933]: I1201 09:35:10.038194 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb6d65b-3b0f-4d8f-83be-242ed7b0807c-utilities\") pod \"4bb6d65b-3b0f-4d8f-83be-242ed7b0807c\" (UID: \"4bb6d65b-3b0f-4d8f-83be-242ed7b0807c\") " Dec 01 09:35:10 crc kubenswrapper[4933]: I1201 09:35:10.038255 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq9fr\" (UniqueName: \"kubernetes.io/projected/4bb6d65b-3b0f-4d8f-83be-242ed7b0807c-kube-api-access-zq9fr\") pod \"4bb6d65b-3b0f-4d8f-83be-242ed7b0807c\" (UID: \"4bb6d65b-3b0f-4d8f-83be-242ed7b0807c\") " Dec 01 09:35:10 crc kubenswrapper[4933]: I1201 09:35:10.039409 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bb6d65b-3b0f-4d8f-83be-242ed7b0807c-utilities" (OuterVolumeSpecName: "utilities") pod "4bb6d65b-3b0f-4d8f-83be-242ed7b0807c" (UID: "4bb6d65b-3b0f-4d8f-83be-242ed7b0807c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:35:10 crc kubenswrapper[4933]: I1201 09:35:10.046250 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb6d65b-3b0f-4d8f-83be-242ed7b0807c-kube-api-access-zq9fr" (OuterVolumeSpecName: "kube-api-access-zq9fr") pod "4bb6d65b-3b0f-4d8f-83be-242ed7b0807c" (UID: "4bb6d65b-3b0f-4d8f-83be-242ed7b0807c"). InnerVolumeSpecName "kube-api-access-zq9fr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:35:10 crc kubenswrapper[4933]: I1201 09:35:10.063881 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bb6d65b-3b0f-4d8f-83be-242ed7b0807c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4bb6d65b-3b0f-4d8f-83be-242ed7b0807c" (UID: "4bb6d65b-3b0f-4d8f-83be-242ed7b0807c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:35:10 crc kubenswrapper[4933]: I1201 09:35:10.087403 4933 generic.go:334] "Generic (PLEG): container finished" podID="4bb6d65b-3b0f-4d8f-83be-242ed7b0807c" containerID="8858330236cd0618fcfb1323bd89a970022568c1517cb370328373302822e5ff" exitCode=0 Dec 01 09:35:10 crc kubenswrapper[4933]: I1201 09:35:10.087683 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4cjr" event={"ID":"4bb6d65b-3b0f-4d8f-83be-242ed7b0807c","Type":"ContainerDied","Data":"8858330236cd0618fcfb1323bd89a970022568c1517cb370328373302822e5ff"} Dec 01 09:35:10 crc kubenswrapper[4933]: I1201 09:35:10.087806 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4cjr" event={"ID":"4bb6d65b-3b0f-4d8f-83be-242ed7b0807c","Type":"ContainerDied","Data":"5278b130fac54ba64d9add799dbbbc3be3407630474f7242566b39612c37fede"} Dec 01 09:35:10 crc kubenswrapper[4933]: I1201 09:35:10.087753 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4cjr" Dec 01 09:35:10 crc kubenswrapper[4933]: I1201 09:35:10.087863 4933 scope.go:117] "RemoveContainer" containerID="8858330236cd0618fcfb1323bd89a970022568c1517cb370328373302822e5ff" Dec 01 09:35:10 crc kubenswrapper[4933]: I1201 09:35:10.091406 4933 generic.go:334] "Generic (PLEG): container finished" podID="194f9dd3-85db-4303-ad0e-180d0e160da0" containerID="8cfa5361032f1b7da47612bd389b2f08fb0b2da04cfc7e9437b29b2ced6e55f1" exitCode=0 Dec 01 09:35:10 crc kubenswrapper[4933]: I1201 09:35:10.091468 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5tk2j" event={"ID":"194f9dd3-85db-4303-ad0e-180d0e160da0","Type":"ContainerDied","Data":"8cfa5361032f1b7da47612bd389b2f08fb0b2da04cfc7e9437b29b2ced6e55f1"} Dec 01 09:35:10 crc kubenswrapper[4933]: I1201 09:35:10.107346 4933 scope.go:117] "RemoveContainer" containerID="4c90c3d2013eed7ac2e5fdc17e6dc23f221acf3e48873c43d72d266c44057ffc" Dec 01 09:35:10 crc kubenswrapper[4933]: I1201 09:35:10.135891 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4cjr"] Dec 01 09:35:10 crc kubenswrapper[4933]: I1201 09:35:10.140455 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb6d65b-3b0f-4d8f-83be-242ed7b0807c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:10 crc kubenswrapper[4933]: I1201 09:35:10.140502 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb6d65b-3b0f-4d8f-83be-242ed7b0807c-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:10 crc kubenswrapper[4933]: I1201 09:35:10.140519 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq9fr\" (UniqueName: \"kubernetes.io/projected/4bb6d65b-3b0f-4d8f-83be-242ed7b0807c-kube-api-access-zq9fr\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:10 crc kubenswrapper[4933]: I1201 09:35:10.142631 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4cjr"] Dec 01 09:35:10 crc kubenswrapper[4933]: I1201 09:35:10.143386 4933 scope.go:117] "RemoveContainer" containerID="be933f3c49a4be32c90954fd9fdf4c6b6ddb99741c9bcebec10839ce27a0a640" Dec 01 09:35:10 crc kubenswrapper[4933]: I1201 09:35:10.160832 4933 scope.go:117] "RemoveContainer" containerID="8858330236cd0618fcfb1323bd89a970022568c1517cb370328373302822e5ff" Dec 01 09:35:10 crc kubenswrapper[4933]: E1201 09:35:10.161425 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8858330236cd0618fcfb1323bd89a970022568c1517cb370328373302822e5ff\": container with ID starting with 8858330236cd0618fcfb1323bd89a970022568c1517cb370328373302822e5ff not found: ID does not exist" containerID="8858330236cd0618fcfb1323bd89a970022568c1517cb370328373302822e5ff" Dec 01 09:35:10 crc kubenswrapper[4933]: I1201 09:35:10.161494 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8858330236cd0618fcfb1323bd89a970022568c1517cb370328373302822e5ff"} err="failed to get container status \"8858330236cd0618fcfb1323bd89a970022568c1517cb370328373302822e5ff\": rpc error: code = NotFound desc = could not find container \"8858330236cd0618fcfb1323bd89a970022568c1517cb370328373302822e5ff\": container with ID starting with 8858330236cd0618fcfb1323bd89a970022568c1517cb370328373302822e5ff 
not found: ID does not exist" Dec 01 09:35:10 crc kubenswrapper[4933]: I1201 09:35:10.161542 4933 scope.go:117] "RemoveContainer" containerID="4c90c3d2013eed7ac2e5fdc17e6dc23f221acf3e48873c43d72d266c44057ffc" Dec 01 09:35:10 crc kubenswrapper[4933]: E1201 09:35:10.162161 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c90c3d2013eed7ac2e5fdc17e6dc23f221acf3e48873c43d72d266c44057ffc\": container with ID starting with 4c90c3d2013eed7ac2e5fdc17e6dc23f221acf3e48873c43d72d266c44057ffc not found: ID does not exist" containerID="4c90c3d2013eed7ac2e5fdc17e6dc23f221acf3e48873c43d72d266c44057ffc" Dec 01 09:35:10 crc kubenswrapper[4933]: I1201 09:35:10.162211 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c90c3d2013eed7ac2e5fdc17e6dc23f221acf3e48873c43d72d266c44057ffc"} err="failed to get container status \"4c90c3d2013eed7ac2e5fdc17e6dc23f221acf3e48873c43d72d266c44057ffc\": rpc error: code = NotFound desc = could not find container \"4c90c3d2013eed7ac2e5fdc17e6dc23f221acf3e48873c43d72d266c44057ffc\": container with ID starting with 4c90c3d2013eed7ac2e5fdc17e6dc23f221acf3e48873c43d72d266c44057ffc not found: ID does not exist" Dec 01 09:35:10 crc kubenswrapper[4933]: I1201 09:35:10.162251 4933 scope.go:117] "RemoveContainer" containerID="be933f3c49a4be32c90954fd9fdf4c6b6ddb99741c9bcebec10839ce27a0a640" Dec 01 09:35:10 crc kubenswrapper[4933]: E1201 09:35:10.162666 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be933f3c49a4be32c90954fd9fdf4c6b6ddb99741c9bcebec10839ce27a0a640\": container with ID starting with be933f3c49a4be32c90954fd9fdf4c6b6ddb99741c9bcebec10839ce27a0a640 not found: ID does not exist" containerID="be933f3c49a4be32c90954fd9fdf4c6b6ddb99741c9bcebec10839ce27a0a640" Dec 01 09:35:10 crc kubenswrapper[4933]: I1201 09:35:10.162739 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be933f3c49a4be32c90954fd9fdf4c6b6ddb99741c9bcebec10839ce27a0a640"} err="failed to get container status \"be933f3c49a4be32c90954fd9fdf4c6b6ddb99741c9bcebec10839ce27a0a640\": rpc error: code = NotFound desc = could not find container \"be933f3c49a4be32c90954fd9fdf4c6b6ddb99741c9bcebec10839ce27a0a640\": container with ID starting with be933f3c49a4be32c90954fd9fdf4c6b6ddb99741c9bcebec10839ce27a0a640 not found: ID does not exist" Dec 01 09:35:11 crc kubenswrapper[4933]: I1201 09:35:11.101811 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpldw" event={"ID":"1527ab8a-0674-4959-872e-bb759c7657e1","Type":"ContainerStarted","Data":"6ca74c85d9f13c3e95244b6395305f4a5e97db7d5efcbaf760db1111a67a4393"} Dec 01 09:35:11 crc kubenswrapper[4933]: I1201 09:35:11.678161 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb6d65b-3b0f-4d8f-83be-242ed7b0807c" path="/var/lib/kubelet/pods/4bb6d65b-3b0f-4d8f-83be-242ed7b0807c/volumes" Dec 01 09:35:11 crc kubenswrapper[4933]: I1201 09:35:11.741086 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:35:11 crc kubenswrapper[4933]: I1201 09:35:11.741170 4933 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:35:11 crc kubenswrapper[4933]: I1201 09:35:11.741229 4933 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" Dec 01 09:35:11 crc kubenswrapper[4933]: I1201 09:35:11.741891 4933 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25"} pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 09:35:11 crc kubenswrapper[4933]: I1201 09:35:11.741972 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" containerID="cri-o://3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25" gracePeriod=600 Dec 01 09:35:12 crc kubenswrapper[4933]: I1201 09:35:12.113398 4933 generic.go:334] "Generic (PLEG): container finished" podID="1527ab8a-0674-4959-872e-bb759c7657e1" containerID="6ca74c85d9f13c3e95244b6395305f4a5e97db7d5efcbaf760db1111a67a4393" exitCode=0 Dec 01 09:35:12 crc kubenswrapper[4933]: I1201 09:35:12.113472 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpldw" event={"ID":"1527ab8a-0674-4959-872e-bb759c7657e1","Type":"ContainerDied","Data":"6ca74c85d9f13c3e95244b6395305f4a5e97db7d5efcbaf760db1111a67a4393"} Dec 01 09:35:13 crc kubenswrapper[4933]: I1201 09:35:13.120419 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5tk2j" event={"ID":"194f9dd3-85db-4303-ad0e-180d0e160da0","Type":"ContainerStarted","Data":"13043f0c841f4858206d8bb94cbb585ac1d5ef962e31458340d3e3e186f4562c"} Dec 01 09:35:13 crc kubenswrapper[4933]: I1201 09:35:13.122845 4933 generic.go:334] "Generic (PLEG): container finished" podID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerID="3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25" exitCode=0 Dec 01 09:35:13 crc kubenswrapper[4933]: I1201 09:35:13.122874 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerDied","Data":"3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25"} Dec 01 09:35:13 crc kubenswrapper[4933]: I1201 09:35:13.142180 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5tk2j" podStartSLOduration=3.9374839059999998 podStartE2EDuration="1m0.142163178s" podCreationTimestamp="2025-12-01 09:34:13 +0000 UTC" firstStartedPulling="2025-12-01 09:34:15.21264011 +0000 UTC m=+145.854363715" lastFinishedPulling="2025-12-01 09:35:11.417319372 +0000 UTC m=+202.059042987" observedRunningTime="2025-12-01 09:35:13.140517208 +0000 UTC m=+203.782240833" watchObservedRunningTime="2025-12-01 09:35:13.142163178 +0000 UTC m=+203.783886793" Dec 01 09:35:13 crc kubenswrapper[4933]: I1201 09:35:13.918124 4933 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-5tk2j" Dec 01 09:35:13 crc kubenswrapper[4933]: I1201 09:35:13.918177 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5tk2j" Dec 01 09:35:13 crc kubenswrapper[4933]: I1201 09:35:13.966849 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5tk2j" Dec 01 09:35:15 crc kubenswrapper[4933]: I1201 09:35:15.136008 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerStarted","Data":"530143e76725bdde118c91d28d335ba05105652665eaa497ed10370fd16dac0b"} Dec 01 09:35:16 crc kubenswrapper[4933]: I1201 09:35:16.146781 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjpdf" event={"ID":"447e5be4-e974-45c7-a58a-2efddd4bd49c","Type":"ContainerStarted","Data":"31eb8744d5282dc0122a6c4b8b086dd7b462ac11867fe060cd7b4f43244864d4"} Dec 01 09:35:16 crc kubenswrapper[4933]: I1201 09:35:16.148843 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8gph" event={"ID":"c1f2e651-74da-4f9c-9294-c2d45830b676","Type":"ContainerStarted","Data":"2409398b81c2f1726c860d3e9c135a83a97b13ac18c93da758cd6d3bc3df75c2"} Dec 01 09:35:16 crc kubenswrapper[4933]: I1201 09:35:16.150656 4933 generic.go:334] "Generic (PLEG): container finished" podID="a08a8024-ebc2-4e05-a6a0-ebc22bed8658" containerID="02dd55a589fcc88f8ba5fcb71ad03d2aa2766238863c0b11e822abe58e90b356" exitCode=0 Dec 01 09:35:16 crc kubenswrapper[4933]: I1201 09:35:16.150853 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngb5q" event={"ID":"a08a8024-ebc2-4e05-a6a0-ebc22bed8658","Type":"ContainerDied","Data":"02dd55a589fcc88f8ba5fcb71ad03d2aa2766238863c0b11e822abe58e90b356"} Dec 01 09:35:17 crc kubenswrapper[4933]: I1201 09:35:17.158939 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngb5q" event={"ID":"a08a8024-ebc2-4e05-a6a0-ebc22bed8658","Type":"ContainerStarted","Data":"7a55f0b45fc14128779cf55d2704208252d808b5ca2211f84c1bd95f7f05e565"} Dec 01 09:35:17 crc kubenswrapper[4933]: I1201 09:35:17.161097 4933 generic.go:334] "Generic (PLEG): container finished" podID="447e5be4-e974-45c7-a58a-2efddd4bd49c" containerID="31eb8744d5282dc0122a6c4b8b086dd7b462ac11867fe060cd7b4f43244864d4" exitCode=0 Dec 01 09:35:17 crc kubenswrapper[4933]: I1201 09:35:17.161160 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjpdf" event={"ID":"447e5be4-e974-45c7-a58a-2efddd4bd49c","Type":"ContainerDied","Data":"31eb8744d5282dc0122a6c4b8b086dd7b462ac11867fe060cd7b4f43244864d4"} Dec 01 09:35:17 crc kubenswrapper[4933]: I1201 09:35:17.164415 4933 generic.go:334] "Generic (PLEG): container finished" podID="5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b" containerID="fd216f784c8281309bc3581b4b956d1a6b4f5be0a13c449ef0eb24b037bed0a0" exitCode=0 Dec 01 09:35:17 crc kubenswrapper[4933]: I1201 09:35:17.164515 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jlnld" event={"ID":"5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b","Type":"ContainerDied","Data":"fd216f784c8281309bc3581b4b956d1a6b4f5be0a13c449ef0eb24b037bed0a0"} Dec 01 09:35:17 crc kubenswrapper[4933]: I1201 
09:35:17.167325 4933 generic.go:334] "Generic (PLEG): container finished" podID="c1f2e651-74da-4f9c-9294-c2d45830b676" containerID="2409398b81c2f1726c860d3e9c135a83a97b13ac18c93da758cd6d3bc3df75c2" exitCode=0 Dec 01 09:35:17 crc kubenswrapper[4933]: I1201 09:35:17.167394 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8gph" event={"ID":"c1f2e651-74da-4f9c-9294-c2d45830b676","Type":"ContainerDied","Data":"2409398b81c2f1726c860d3e9c135a83a97b13ac18c93da758cd6d3bc3df75c2"} Dec 01 09:35:17 crc kubenswrapper[4933]: I1201 09:35:17.172297 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpldw" event={"ID":"1527ab8a-0674-4959-872e-bb759c7657e1","Type":"ContainerStarted","Data":"a1ce571fbd27752a6d8b6100f0ca85aa4401b2925ec47bedd6a55659b6baa647"} Dec 01 09:35:17 crc kubenswrapper[4933]: I1201 09:35:17.186119 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ngb5q" podStartSLOduration=4.961491803 podStartE2EDuration="1m2.186097036s" podCreationTimestamp="2025-12-01 09:34:15 +0000 UTC" firstStartedPulling="2025-12-01 09:34:19.412632982 +0000 UTC m=+150.054356597" lastFinishedPulling="2025-12-01 09:35:16.637238215 +0000 UTC m=+207.278961830" observedRunningTime="2025-12-01 09:35:17.184399404 +0000 UTC m=+207.826123019" watchObservedRunningTime="2025-12-01 09:35:17.186097036 +0000 UTC m=+207.827820661" Dec 01 09:35:17 crc kubenswrapper[4933]: I1201 09:35:17.268899 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xpldw" podStartSLOduration=3.504559071 podStartE2EDuration="1m3.268883688s" podCreationTimestamp="2025-12-01 09:34:14 +0000 UTC" firstStartedPulling="2025-12-01 09:34:16.227407774 +0000 UTC m=+146.869131379" lastFinishedPulling="2025-12-01 09:35:15.991732391 +0000 UTC m=+206.633455996" observedRunningTime="2025-12-01 09:35:17.267502664 +0000 UTC m=+207.909226309" watchObservedRunningTime="2025-12-01 09:35:17.268883688 +0000 UTC m=+207.910607303" Dec 01 09:35:18 crc kubenswrapper[4933]: I1201 09:35:18.181149 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jlnld" event={"ID":"5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b","Type":"ContainerStarted","Data":"a06650e0c6400dee16635ef2ef4d9951fe8df256bb514cab4ec7ab5c8fbdd08f"} Dec 01 09:35:18 crc kubenswrapper[4933]: I1201 09:35:18.183863 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8gph" event={"ID":"c1f2e651-74da-4f9c-9294-c2d45830b676","Type":"ContainerStarted","Data":"a52375e49236b7456d0bd44a3f2c4c74f30915d50e0bb5fb5a13cea383ce4c5c"} Dec 01 09:35:18 crc kubenswrapper[4933]: I1201 09:35:18.186148 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjpdf" event={"ID":"447e5be4-e974-45c7-a58a-2efddd4bd49c","Type":"ContainerStarted","Data":"3b27e14d234acf537deda29ad786b4a1bbb4ae18bd7442d0e64e56bec38b4960"} Dec 01 09:35:18 crc kubenswrapper[4933]: I1201 09:35:18.208575 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jlnld" podStartSLOduration=3.618422594 podStartE2EDuration="1m5.208548172s" podCreationTimestamp="2025-12-01 09:34:13 +0000 UTC" firstStartedPulling="2025-12-01 09:34:16.242333053 +0000 UTC m=+146.884056668" lastFinishedPulling="2025-12-01 09:35:17.832458631 +0000 UTC m=+208.474182246" 
observedRunningTime="2025-12-01 09:35:18.206859451 +0000 UTC m=+208.848583066" watchObservedRunningTime="2025-12-01 09:35:18.208548172 +0000 UTC m=+208.850271787" Dec 01 09:35:18 crc kubenswrapper[4933]: I1201 09:35:18.225870 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pjpdf" podStartSLOduration=2.921750721 podStartE2EDuration="1m1.225843426s" podCreationTimestamp="2025-12-01 09:34:17 +0000 UTC" firstStartedPulling="2025-12-01 09:34:19.429555821 +0000 UTC m=+150.071279436" lastFinishedPulling="2025-12-01 09:35:17.733648526 +0000 UTC m=+208.375372141" observedRunningTime="2025-12-01 09:35:18.223743785 +0000 UTC m=+208.865467390" watchObservedRunningTime="2025-12-01 09:35:18.225843426 +0000 UTC m=+208.867567041" Dec 01 09:35:18 crc kubenswrapper[4933]: I1201 09:35:18.246158 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b8gph" podStartSLOduration=4.004429229 podStartE2EDuration="1m2.246135045s" podCreationTimestamp="2025-12-01 09:34:16 +0000 UTC" firstStartedPulling="2025-12-01 09:34:19.367948236 +0000 UTC m=+150.009671851" lastFinishedPulling="2025-12-01 09:35:17.609654052 +0000 UTC m=+208.251377667" observedRunningTime="2025-12-01 09:35:18.242237499 +0000 UTC m=+208.883961114" watchObservedRunningTime="2025-12-01 09:35:18.246135045 +0000 UTC m=+208.887858660" Dec 01 09:35:21 crc kubenswrapper[4933]: I1201 09:35:21.482769 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" podUID="b805d945-8eed-48d3-9547-560266e5dfb1" containerName="oauth-openshift" containerID="cri-o://e68bdc55386bd4e1f395356a06ccd2194d56192195e739e18da415599b166c27" gracePeriod=15 Dec 01 09:35:23 crc kubenswrapper[4933]: I1201 09:35:23.970830 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5tk2j" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.103098 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jlnld" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.103170 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jlnld" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.143636 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jlnld" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.218598 4933 generic.go:334] "Generic (PLEG): container finished" podID="b805d945-8eed-48d3-9547-560266e5dfb1" containerID="e68bdc55386bd4e1f395356a06ccd2194d56192195e739e18da415599b166c27" exitCode=0 Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.218693 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" event={"ID":"b805d945-8eed-48d3-9547-560266e5dfb1","Type":"ContainerDied","Data":"e68bdc55386bd4e1f395356a06ccd2194d56192195e739e18da415599b166c27"} Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.263184 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jlnld" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.407079 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.441134 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-69b74fc85f-r9lm8"] Dec 01 09:35:24 crc kubenswrapper[4933]: E1201 09:35:24.441469 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b805d945-8eed-48d3-9547-560266e5dfb1" containerName="oauth-openshift" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.441488 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="b805d945-8eed-48d3-9547-560266e5dfb1" containerName="oauth-openshift" Dec 01 09:35:24 crc kubenswrapper[4933]: E1201 09:35:24.441501 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e57325d2-83bc-484f-a95a-548b55435acd" containerName="extract-content" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.441510 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="e57325d2-83bc-484f-a95a-548b55435acd" containerName="extract-content" Dec 01 09:35:24 crc kubenswrapper[4933]: E1201 09:35:24.441525 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e57325d2-83bc-484f-a95a-548b55435acd" containerName="registry-server" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.441536 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="e57325d2-83bc-484f-a95a-548b55435acd" containerName="registry-server" Dec 01 09:35:24 crc kubenswrapper[4933]: E1201 09:35:24.441550 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb6d65b-3b0f-4d8f-83be-242ed7b0807c" containerName="extract-utilities" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.441559 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb6d65b-3b0f-4d8f-83be-242ed7b0807c" containerName="extract-utilities" Dec 01 09:35:24 crc kubenswrapper[4933]: E1201 09:35:24.441573 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c2108a8-ede0-4da7-b73d-82838aac033b" containerName="pruner" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.441581 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c2108a8-ede0-4da7-b73d-82838aac033b" containerName="pruner" Dec 01 09:35:24 crc kubenswrapper[4933]: E1201 09:35:24.441591 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb6d65b-3b0f-4d8f-83be-242ed7b0807c" containerName="extract-content" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.441601 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb6d65b-3b0f-4d8f-83be-242ed7b0807c" containerName="extract-content" Dec 01 09:35:24 crc kubenswrapper[4933]: E1201 09:35:24.441611 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e57325d2-83bc-484f-a95a-548b55435acd" containerName="extract-utilities" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.441619 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="e57325d2-83bc-484f-a95a-548b55435acd" containerName="extract-utilities" Dec 01 09:35:24 crc kubenswrapper[4933]: E1201 09:35:24.441633 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb6d65b-3b0f-4d8f-83be-242ed7b0807c" containerName="registry-server" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.441644 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb6d65b-3b0f-4d8f-83be-242ed7b0807c" containerName="registry-server" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.441769 4933 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="4bb6d65b-3b0f-4d8f-83be-242ed7b0807c" containerName="registry-server" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.441788 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="b805d945-8eed-48d3-9547-560266e5dfb1" containerName="oauth-openshift" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.441796 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="e57325d2-83bc-484f-a95a-548b55435acd" containerName="registry-server" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.441806 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c2108a8-ede0-4da7-b73d-82838aac033b" containerName="pruner" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.442325 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.479124 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-69b74fc85f-r9lm8"] Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.511045 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh2bz\" (UniqueName: \"kubernetes.io/projected/b805d945-8eed-48d3-9547-560266e5dfb1-kube-api-access-wh2bz\") pod \"b805d945-8eed-48d3-9547-560266e5dfb1\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.511123 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-router-certs\") pod \"b805d945-8eed-48d3-9547-560266e5dfb1\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.511202 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-user-template-error\") pod \"b805d945-8eed-48d3-9547-560266e5dfb1\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.511272 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-user-template-login\") pod \"b805d945-8eed-48d3-9547-560266e5dfb1\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.511366 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-user-idp-0-file-data\") pod \"b805d945-8eed-48d3-9547-560266e5dfb1\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.511398 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-cliconfig\") pod \"b805d945-8eed-48d3-9547-560266e5dfb1\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.511425 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-trusted-ca-bundle\") pod \"b805d945-8eed-48d3-9547-560266e5dfb1\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.511467 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b805d945-8eed-48d3-9547-560266e5dfb1-audit-policies\") pod \"b805d945-8eed-48d3-9547-560266e5dfb1\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.511511 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b805d945-8eed-48d3-9547-560266e5dfb1-audit-dir\") pod \"b805d945-8eed-48d3-9547-560266e5dfb1\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.511546 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-serving-cert\") pod \"b805d945-8eed-48d3-9547-560266e5dfb1\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.511581 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-ocp-branding-template\") pod \"b805d945-8eed-48d3-9547-560266e5dfb1\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.511642 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-user-template-provider-selection\") pod \"b805d945-8eed-48d3-9547-560266e5dfb1\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.511684 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-service-ca\") pod \"b805d945-8eed-48d3-9547-560266e5dfb1\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.512614 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b805d945-8eed-48d3-9547-560266e5dfb1" (UID: "b805d945-8eed-48d3-9547-560266e5dfb1"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.512931 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-session\") pod \"b805d945-8eed-48d3-9547-560266e5dfb1\" (UID: \"b805d945-8eed-48d3-9547-560266e5dfb1\") " Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.513100 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b805d945-8eed-48d3-9547-560266e5dfb1-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b805d945-8eed-48d3-9547-560266e5dfb1" (UID: "b805d945-8eed-48d3-9547-560266e5dfb1"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.513295 4933 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b805d945-8eed-48d3-9547-560266e5dfb1-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.514161 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b805d945-8eed-48d3-9547-560266e5dfb1" (UID: "b805d945-8eed-48d3-9547-560266e5dfb1"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.514582 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b805d945-8eed-48d3-9547-560266e5dfb1" (UID: "b805d945-8eed-48d3-9547-560266e5dfb1"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.514894 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b805d945-8eed-48d3-9547-560266e5dfb1-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b805d945-8eed-48d3-9547-560266e5dfb1" (UID: "b805d945-8eed-48d3-9547-560266e5dfb1"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.520580 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b805d945-8eed-48d3-9547-560266e5dfb1" (UID: "b805d945-8eed-48d3-9547-560266e5dfb1"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.521225 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b805d945-8eed-48d3-9547-560266e5dfb1" (UID: "b805d945-8eed-48d3-9547-560266e5dfb1"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.522075 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b805d945-8eed-48d3-9547-560266e5dfb1" (UID: "b805d945-8eed-48d3-9547-560266e5dfb1"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.522839 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b805d945-8eed-48d3-9547-560266e5dfb1" (UID: "b805d945-8eed-48d3-9547-560266e5dfb1"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.524061 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b805d945-8eed-48d3-9547-560266e5dfb1" (UID: "b805d945-8eed-48d3-9547-560266e5dfb1"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.524438 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b805d945-8eed-48d3-9547-560266e5dfb1" (UID: "b805d945-8eed-48d3-9547-560266e5dfb1"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.524693 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b805d945-8eed-48d3-9547-560266e5dfb1-kube-api-access-wh2bz" (OuterVolumeSpecName: "kube-api-access-wh2bz") pod "b805d945-8eed-48d3-9547-560266e5dfb1" (UID: "b805d945-8eed-48d3-9547-560266e5dfb1"). InnerVolumeSpecName "kube-api-access-wh2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.524945 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b805d945-8eed-48d3-9547-560266e5dfb1" (UID: "b805d945-8eed-48d3-9547-560266e5dfb1"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.529342 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b805d945-8eed-48d3-9547-560266e5dfb1" (UID: "b805d945-8eed-48d3-9547-560266e5dfb1"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.611242 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xpldw" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.611601 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xpldw" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.614676 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.614788 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.614822 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-system-router-certs\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.614845 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.614869 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.615290 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-audit-policies\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.615464 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-system-session\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: 
\"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.615632 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzs4s\" (UniqueName: \"kubernetes.io/projected/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-kube-api-access-dzs4s\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.615713 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-system-service-ca\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.615790 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.615839 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-audit-dir\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.615868 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-user-template-login\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.615922 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-user-template-error\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.615964 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.616077 4933 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.616099 4933 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.616114 4933 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.616128 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh2bz\" (UniqueName: \"kubernetes.io/projected/b805d945-8eed-48d3-9547-560266e5dfb1-kube-api-access-wh2bz\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.616142 4933 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.616155 4933 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.616168 4933 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.616181 4933 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.616193 4933 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.616206 4933 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.616222 4933 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b805d945-8eed-48d3-9547-560266e5dfb1-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.616236 4933 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.616252 4933 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" 
(UniqueName: \"kubernetes.io/secret/b805d945-8eed-48d3-9547-560266e5dfb1-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.655544 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xpldw" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.717133 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzs4s\" (UniqueName: \"kubernetes.io/projected/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-kube-api-access-dzs4s\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.717207 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-system-service-ca\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.717246 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.717289 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-audit-dir\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.717332 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-user-template-login\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.717356 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-user-template-error\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.717384 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.717423 4933 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.717451 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.717474 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-system-router-certs\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.717496 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.717522 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.717549 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-audit-policies\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.717581 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-system-session\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.718352 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-audit-dir\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.719707 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-audit-policies\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.719767 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-system-service-ca\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.719738 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.720141 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.721799 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.722105 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-user-template-error\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.722578 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-system-session\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.722896 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.723681 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.724024 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-system-router-certs\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.724546 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.724861 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-v4-0-config-user-template-login\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.736914 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzs4s\" (UniqueName: \"kubernetes.io/projected/2ce78d1f-aec7-4146-b441-3e1b1df1a33d-kube-api-access-dzs4s\") pod \"oauth-openshift-69b74fc85f-r9lm8\" (UID: \"2ce78d1f-aec7-4146-b441-3e1b1df1a33d\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:24 crc kubenswrapper[4933]: I1201 09:35:24.784850 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" Dec 01 09:35:25 crc kubenswrapper[4933]: I1201 09:35:25.183376 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-69b74fc85f-r9lm8"] Dec 01 09:35:25 crc kubenswrapper[4933]: W1201 09:35:25.191332 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ce78d1f_aec7_4146_b441_3e1b1df1a33d.slice/crio-8e961fe148157b80ba5b739c7a31fcf2068f1f3146a7a86592775c44ace1b7ad WatchSource:0}: Error finding container 8e961fe148157b80ba5b739c7a31fcf2068f1f3146a7a86592775c44ace1b7ad: Status 404 returned error can't find the container with id 8e961fe148157b80ba5b739c7a31fcf2068f1f3146a7a86592775c44ace1b7ad Dec 01 09:35:25 crc kubenswrapper[4933]: I1201 09:35:25.226942 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" event={"ID":"2ce78d1f-aec7-4146-b441-3e1b1df1a33d","Type":"ContainerStarted","Data":"8e961fe148157b80ba5b739c7a31fcf2068f1f3146a7a86592775c44ace1b7ad"} Dec 01 09:35:25 crc kubenswrapper[4933]: I1201 09:35:25.229907 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" event={"ID":"b805d945-8eed-48d3-9547-560266e5dfb1","Type":"ContainerDied","Data":"ac9dffd47884a224173cbe8f39fd8e7655a02b0da4910f46d469b6ffe9130ecd"} Dec 01 09:35:25 crc kubenswrapper[4933]: I1201 09:35:25.229952 4933 scope.go:117] "RemoveContainer" containerID="e68bdc55386bd4e1f395356a06ccd2194d56192195e739e18da415599b166c27" Dec 01 09:35:25 crc kubenswrapper[4933]: I1201 09:35:25.230455 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4wqht" Dec 01 09:35:25 crc kubenswrapper[4933]: I1201 09:35:25.268514 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4wqht"] Dec 01 09:35:25 crc kubenswrapper[4933]: I1201 09:35:25.271450 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4wqht"] Dec 01 09:35:25 crc kubenswrapper[4933]: I1201 09:35:25.278531 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xpldw" Dec 01 09:35:25 crc kubenswrapper[4933]: I1201 09:35:25.676535 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b805d945-8eed-48d3-9547-560266e5dfb1" path="/var/lib/kubelet/pods/b805d945-8eed-48d3-9547-560266e5dfb1/volumes" Dec 01 09:35:25 crc kubenswrapper[4933]: I1201 09:35:25.889917 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ngb5q" Dec 01 09:35:25 crc kubenswrapper[4933]: I1201 09:35:25.889988 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ngb5q" Dec 01 09:35:25 crc kubenswrapper[4933]: I1201 09:35:25.983867 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ngb5q" Dec 01 09:35:26 crc kubenswrapper[4933]: I1201 09:35:26.202658 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xpldw"] Dec 01 09:35:26 crc kubenswrapper[4933]: I1201 09:35:26.301347 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-ngb5q" Dec 01 09:35:27 crc kubenswrapper[4933]: I1201 09:35:27.246430 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b8gph" Dec 01 09:35:27 crc kubenswrapper[4933]: I1201 09:35:27.246810 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b8gph" Dec 01 09:35:27 crc kubenswrapper[4933]: I1201 09:35:27.249429 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" event={"ID":"2ce78d1f-aec7-4146-b441-3e1b1df1a33d","Type":"ContainerStarted","Data":"db5c5ec522f3ee2d0304b29eaa07a6218397b52654fc1e91aba61146c5c98978"} Dec 01 09:35:27 crc kubenswrapper[4933]: I1201 09:35:27.249837 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xpldw" podUID="1527ab8a-0674-4959-872e-bb759c7657e1" containerName="registry-server" containerID="cri-o://a1ce571fbd27752a6d8b6100f0ca85aa4401b2925ec47bedd6a55659b6baa647" gracePeriod=2 Dec 01 09:35:27 crc kubenswrapper[4933]: I1201 09:35:27.285248 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8" podStartSLOduration=31.285212709 podStartE2EDuration="31.285212709s" podCreationTimestamp="2025-12-01 09:34:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:35:27.283426415 +0000 UTC m=+217.925150040" watchObservedRunningTime="2025-12-01 09:35:27.285212709 +0000 UTC m=+217.926936324" Dec 01 09:35:27 crc kubenswrapper[4933]: I1201 09:35:27.313414 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b8gph" Dec 01 09:35:27 crc kubenswrapper[4933]: I1201 09:35:27.663509 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pjpdf" Dec 01 09:35:27 crc kubenswrapper[4933]: I1201 09:35:27.663687 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pjpdf" Dec 01 09:35:27 crc kubenswrapper[4933]: I1201 09:35:27.669327 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xpldw" Dec 01 09:35:27 crc kubenswrapper[4933]: I1201 09:35:27.671197 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1527ab8a-0674-4959-872e-bb759c7657e1-utilities\") pod \"1527ab8a-0674-4959-872e-bb759c7657e1\" (UID: \"1527ab8a-0674-4959-872e-bb759c7657e1\") " Dec 01 09:35:27 crc kubenswrapper[4933]: I1201 09:35:27.671320 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtlv2\" (UniqueName: \"kubernetes.io/projected/1527ab8a-0674-4959-872e-bb759c7657e1-kube-api-access-wtlv2\") pod \"1527ab8a-0674-4959-872e-bb759c7657e1\" (UID: \"1527ab8a-0674-4959-872e-bb759c7657e1\") " Dec 01 09:35:27 crc kubenswrapper[4933]: I1201 09:35:27.671345 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1527ab8a-0674-4959-872e-bb759c7657e1-catalog-content\") pod \"1527ab8a-0674-4959-872e-bb759c7657e1\" (UID: \"1527ab8a-0674-4959-872e-bb759c7657e1\") " Dec 01 09:35:27 crc kubenswrapper[4933]: I1201 09:35:27.673992 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1527ab8a-0674-4959-872e-bb759c7657e1-utilities" (OuterVolumeSpecName: "utilities") pod "1527ab8a-0674-4959-872e-bb759c7657e1" (UID: "1527ab8a-0674-4959-872e-bb759c7657e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:35:27 crc kubenswrapper[4933]: I1201 09:35:27.682846 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1527ab8a-0674-4959-872e-bb759c7657e1-kube-api-access-wtlv2" (OuterVolumeSpecName: "kube-api-access-wtlv2") pod "1527ab8a-0674-4959-872e-bb759c7657e1" (UID: "1527ab8a-0674-4959-872e-bb759c7657e1"). InnerVolumeSpecName "kube-api-access-wtlv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:35:27 crc kubenswrapper[4933]: I1201 09:35:27.720714 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pjpdf" Dec 01 09:35:27 crc kubenswrapper[4933]: I1201 09:35:27.735615 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1527ab8a-0674-4959-872e-bb759c7657e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1527ab8a-0674-4959-872e-bb759c7657e1" (UID: "1527ab8a-0674-4959-872e-bb759c7657e1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:35:27 crc kubenswrapper[4933]: I1201 09:35:27.772543 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtlv2\" (UniqueName: \"kubernetes.io/projected/1527ab8a-0674-4959-872e-bb759c7657e1-kube-api-access-wtlv2\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:27 crc kubenswrapper[4933]: I1201 09:35:27.772583 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1527ab8a-0674-4959-872e-bb759c7657e1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:27 crc kubenswrapper[4933]: I1201 09:35:27.772599 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1527ab8a-0674-4959-872e-bb759c7657e1-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:28 crc kubenswrapper[4933]: I1201 09:35:28.257929 4933 generic.go:334] "Generic (PLEG): container finished" podID="1527ab8a-0674-4959-872e-bb759c7657e1" containerID="a1ce571fbd27752a6d8b6100f0ca85aa4401b2925ec47bedd6a55659b6baa647" exitCode=0 Dec 01 09:35:28 crc kubenswrapper[4933]: I1201 09:35:28.257972 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpldw" event={"ID":"1527ab8a-0674-4959-872e-bb759c7657e1","Type":"ContainerDied","Data":"a1ce571fbd27752a6d8b6100f0ca85aa4401b2925ec47bedd6a55659b6baa647"} Dec 01 09:35:28 crc kubenswrapper[4933]: I1201 09:35:28.258013 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpldw" event={"ID":"1527ab8a-0674-4959-872e-bb759c7657e1","Type":"ContainerDied","Data":"0317e139561546c77bf7d9c99c680fd4f0cb20d79d274b8692b97dc5ef99c974"} Dec 01 09:35:28 crc kubenswrapper[4933]: I1201 09:35:28.258022 4933 util.go:48] "No ready sandbox for pod can be found. 
Dec 01 09:35:28 crc kubenswrapper[4933]: I1201 09:35:28.258033 4933 scope.go:117] "RemoveContainer" containerID="a1ce571fbd27752a6d8b6100f0ca85aa4401b2925ec47bedd6a55659b6baa647"
Dec 01 09:35:28 crc kubenswrapper[4933]: I1201 09:35:28.259460 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8"
Dec 01 09:35:28 crc kubenswrapper[4933]: I1201 09:35:28.270048 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-69b74fc85f-r9lm8"
Dec 01 09:35:28 crc kubenswrapper[4933]: I1201 09:35:28.282973 4933 scope.go:117] "RemoveContainer" containerID="6ca74c85d9f13c3e95244b6395305f4a5e97db7d5efcbaf760db1111a67a4393"
Dec 01 09:35:28 crc kubenswrapper[4933]: I1201 09:35:28.312666 4933 scope.go:117] "RemoveContainer" containerID="f7222373b409b4111d32dc6fa28172c33cda650a0eaeb0e86aa23b1f274a8da5"
Dec 01 09:35:28 crc kubenswrapper[4933]: I1201 09:35:28.324176 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xpldw"]
Dec 01 09:35:28 crc kubenswrapper[4933]: I1201 09:35:28.328341 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pjpdf"
Dec 01 09:35:28 crc kubenswrapper[4933]: I1201 09:35:28.329725 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xpldw"]
Dec 01 09:35:28 crc kubenswrapper[4933]: I1201 09:35:28.331695 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b8gph"
Dec 01 09:35:28 crc kubenswrapper[4933]: I1201 09:35:28.336493 4933 scope.go:117] "RemoveContainer" containerID="a1ce571fbd27752a6d8b6100f0ca85aa4401b2925ec47bedd6a55659b6baa647"
Dec 01 09:35:28 crc kubenswrapper[4933]: E1201 09:35:28.337037 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1ce571fbd27752a6d8b6100f0ca85aa4401b2925ec47bedd6a55659b6baa647\": container with ID starting with a1ce571fbd27752a6d8b6100f0ca85aa4401b2925ec47bedd6a55659b6baa647 not found: ID does not exist" containerID="a1ce571fbd27752a6d8b6100f0ca85aa4401b2925ec47bedd6a55659b6baa647"
Dec 01 09:35:28 crc kubenswrapper[4933]: I1201 09:35:28.337078 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1ce571fbd27752a6d8b6100f0ca85aa4401b2925ec47bedd6a55659b6baa647"} err="failed to get container status \"a1ce571fbd27752a6d8b6100f0ca85aa4401b2925ec47bedd6a55659b6baa647\": rpc error: code = NotFound desc = could not find container \"a1ce571fbd27752a6d8b6100f0ca85aa4401b2925ec47bedd6a55659b6baa647\": container with ID starting with a1ce571fbd27752a6d8b6100f0ca85aa4401b2925ec47bedd6a55659b6baa647 not found: ID does not exist"
Dec 01 09:35:28 crc kubenswrapper[4933]: I1201 09:35:28.337106 4933 scope.go:117] "RemoveContainer" containerID="6ca74c85d9f13c3e95244b6395305f4a5e97db7d5efcbaf760db1111a67a4393"
Dec 01 09:35:28 crc kubenswrapper[4933]: E1201 09:35:28.337368 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ca74c85d9f13c3e95244b6395305f4a5e97db7d5efcbaf760db1111a67a4393\": container with ID starting with 6ca74c85d9f13c3e95244b6395305f4a5e97db7d5efcbaf760db1111a67a4393 not found: ID does not exist" containerID="6ca74c85d9f13c3e95244b6395305f4a5e97db7d5efcbaf760db1111a67a4393"
Dec 01 09:35:28 crc kubenswrapper[4933]: I1201 09:35:28.337408 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ca74c85d9f13c3e95244b6395305f4a5e97db7d5efcbaf760db1111a67a4393"} err="failed to get container status \"6ca74c85d9f13c3e95244b6395305f4a5e97db7d5efcbaf760db1111a67a4393\": rpc error: code = NotFound desc = could not find container \"6ca74c85d9f13c3e95244b6395305f4a5e97db7d5efcbaf760db1111a67a4393\": container with ID starting with 6ca74c85d9f13c3e95244b6395305f4a5e97db7d5efcbaf760db1111a67a4393 not found: ID does not exist"
Dec 01 09:35:28 crc kubenswrapper[4933]: I1201 09:35:28.337438 4933 scope.go:117] "RemoveContainer" containerID="f7222373b409b4111d32dc6fa28172c33cda650a0eaeb0e86aa23b1f274a8da5"
Dec 01 09:35:28 crc kubenswrapper[4933]: E1201 09:35:28.337813 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7222373b409b4111d32dc6fa28172c33cda650a0eaeb0e86aa23b1f274a8da5\": container with ID starting with f7222373b409b4111d32dc6fa28172c33cda650a0eaeb0e86aa23b1f274a8da5 not found: ID does not exist" containerID="f7222373b409b4111d32dc6fa28172c33cda650a0eaeb0e86aa23b1f274a8da5"
Dec 01 09:35:28 crc kubenswrapper[4933]: I1201 09:35:28.337844 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7222373b409b4111d32dc6fa28172c33cda650a0eaeb0e86aa23b1f274a8da5"} err="failed to get container status \"f7222373b409b4111d32dc6fa28172c33cda650a0eaeb0e86aa23b1f274a8da5\": rpc error: code = NotFound desc = could not find container \"f7222373b409b4111d32dc6fa28172c33cda650a0eaeb0e86aa23b1f274a8da5\": container with ID starting with f7222373b409b4111d32dc6fa28172c33cda650a0eaeb0e86aa23b1f274a8da5 not found: ID does not exist"
Dec 01 09:35:29 crc kubenswrapper[4933]: I1201 09:35:29.675856 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1527ab8a-0674-4959-872e-bb759c7657e1" path="/var/lib/kubelet/pods/1527ab8a-0674-4959-872e-bb759c7657e1/volumes"
Dec 01 09:35:30 crc kubenswrapper[4933]: I1201 09:35:30.403126 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pjpdf"]
Dec 01 09:35:31 crc kubenswrapper[4933]: I1201 09:35:31.278900 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pjpdf" podUID="447e5be4-e974-45c7-a58a-2efddd4bd49c" containerName="registry-server" containerID="cri-o://3b27e14d234acf537deda29ad786b4a1bbb4ae18bd7442d0e64e56bec38b4960" gracePeriod=2
Dec 01 09:35:34 crc kubenswrapper[4933]: I1201 09:35:34.963408 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-5tk2j" podUID="194f9dd3-85db-4303-ad0e-180d0e160da0" containerName="registry-server" probeResult="failure" output=<
Dec 01 09:35:34 crc kubenswrapper[4933]: timeout: failed to connect service ":50051" within 1s
Dec 01 09:35:34 crc kubenswrapper[4933]: >
Dec 01 09:35:35 crc kubenswrapper[4933]: I1201 09:35:35.307203 4933 generic.go:334] "Generic (PLEG): container finished" podID="447e5be4-e974-45c7-a58a-2efddd4bd49c" containerID="3b27e14d234acf537deda29ad786b4a1bbb4ae18bd7442d0e64e56bec38b4960" exitCode=0
Dec 01 09:35:35 crc kubenswrapper[4933]: I1201 09:35:35.307297 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjpdf" event={"ID":"447e5be4-e974-45c7-a58a-2efddd4bd49c","Type":"ContainerDied","Data":"3b27e14d234acf537deda29ad786b4a1bbb4ae18bd7442d0e64e56bec38b4960"}
Dec 01 09:35:35 crc kubenswrapper[4933]: I1201 09:35:35.558158 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pjpdf"
Dec 01 09:35:35 crc kubenswrapper[4933]: I1201 09:35:35.677779 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt2mq\" (UniqueName: \"kubernetes.io/projected/447e5be4-e974-45c7-a58a-2efddd4bd49c-kube-api-access-kt2mq\") pod \"447e5be4-e974-45c7-a58a-2efddd4bd49c\" (UID: \"447e5be4-e974-45c7-a58a-2efddd4bd49c\") "
Dec 01 09:35:35 crc kubenswrapper[4933]: I1201 09:35:35.677919 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/447e5be4-e974-45c7-a58a-2efddd4bd49c-catalog-content\") pod \"447e5be4-e974-45c7-a58a-2efddd4bd49c\" (UID: \"447e5be4-e974-45c7-a58a-2efddd4bd49c\") "
Dec 01 09:35:35 crc kubenswrapper[4933]: I1201 09:35:35.677984 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/447e5be4-e974-45c7-a58a-2efddd4bd49c-utilities\") pod \"447e5be4-e974-45c7-a58a-2efddd4bd49c\" (UID: \"447e5be4-e974-45c7-a58a-2efddd4bd49c\") "
Dec 01 09:35:35 crc kubenswrapper[4933]: I1201 09:35:35.678989 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/447e5be4-e974-45c7-a58a-2efddd4bd49c-utilities" (OuterVolumeSpecName: "utilities") pod "447e5be4-e974-45c7-a58a-2efddd4bd49c" (UID: "447e5be4-e974-45c7-a58a-2efddd4bd49c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:35:35 crc kubenswrapper[4933]: I1201 09:35:35.686287 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/447e5be4-e974-45c7-a58a-2efddd4bd49c-kube-api-access-kt2mq" (OuterVolumeSpecName: "kube-api-access-kt2mq") pod "447e5be4-e974-45c7-a58a-2efddd4bd49c" (UID: "447e5be4-e974-45c7-a58a-2efddd4bd49c"). InnerVolumeSpecName "kube-api-access-kt2mq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:35:35 crc kubenswrapper[4933]: I1201 09:35:35.779581 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/447e5be4-e974-45c7-a58a-2efddd4bd49c-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 09:35:35 crc kubenswrapper[4933]: I1201 09:35:35.779615 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt2mq\" (UniqueName: \"kubernetes.io/projected/447e5be4-e974-45c7-a58a-2efddd4bd49c-kube-api-access-kt2mq\") on node \"crc\" DevicePath \"\""
Dec 01 09:35:35 crc kubenswrapper[4933]: I1201 09:35:35.787751 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/447e5be4-e974-45c7-a58a-2efddd4bd49c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "447e5be4-e974-45c7-a58a-2efddd4bd49c" (UID: "447e5be4-e974-45c7-a58a-2efddd4bd49c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:35:35 crc kubenswrapper[4933]: I1201 09:35:35.880593 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/447e5be4-e974-45c7-a58a-2efddd4bd49c-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.186315 4933 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 01 09:35:36 crc kubenswrapper[4933]: E1201 09:35:36.186826 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1527ab8a-0674-4959-872e-bb759c7657e1" containerName="extract-content"
Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.186840 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="1527ab8a-0674-4959-872e-bb759c7657e1" containerName="extract-content"
Dec 01 09:35:36 crc kubenswrapper[4933]: E1201 09:35:36.186857 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1527ab8a-0674-4959-872e-bb759c7657e1" containerName="extract-utilities"
Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.186863 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="1527ab8a-0674-4959-872e-bb759c7657e1" containerName="extract-utilities"
Dec 01 09:35:36 crc kubenswrapper[4933]: E1201 09:35:36.186878 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="447e5be4-e974-45c7-a58a-2efddd4bd49c" containerName="registry-server"
Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.186884 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="447e5be4-e974-45c7-a58a-2efddd4bd49c" containerName="registry-server"
Dec 01 09:35:36 crc kubenswrapper[4933]: E1201 09:35:36.186893 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="447e5be4-e974-45c7-a58a-2efddd4bd49c" containerName="extract-utilities"
Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.186899 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="447e5be4-e974-45c7-a58a-2efddd4bd49c" containerName="extract-utilities"
Dec 01 09:35:36 crc kubenswrapper[4933]: E1201 09:35:36.186913 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="447e5be4-e974-45c7-a58a-2efddd4bd49c" containerName="extract-content"
Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.186919 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="447e5be4-e974-45c7-a58a-2efddd4bd49c" containerName="extract-content"
Dec 01 09:35:36 crc kubenswrapper[4933]: E1201 09:35:36.186928 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1527ab8a-0674-4959-872e-bb759c7657e1" containerName="registry-server"
Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.186934 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="1527ab8a-0674-4959-872e-bb759c7657e1" containerName="registry-server"
Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.187030 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="1527ab8a-0674-4959-872e-bb759c7657e1" containerName="registry-server"
Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.187044 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="447e5be4-e974-45c7-a58a-2efddd4bd49c" containerName="registry-server"
Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.187406 4933 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.187547 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
"No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.187658 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541" gracePeriod=15 Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.187699 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901" gracePeriod=15 Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.187767 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382" gracePeriod=15 Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.187787 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5" gracePeriod=15 Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.187662 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5" gracePeriod=15 Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.188224 4933 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 01 09:35:36 crc kubenswrapper[4933]: E1201 09:35:36.188425 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.188438 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 09:35:36 crc kubenswrapper[4933]: E1201 09:35:36.188448 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.188455 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 09:35:36 crc kubenswrapper[4933]: E1201 09:35:36.188467 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.188475 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 01 09:35:36 crc kubenswrapper[4933]: E1201 09:35:36.188489 4933 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.188495 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 01 09:35:36 crc kubenswrapper[4933]: E1201 09:35:36.188507 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.188514 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 01 09:35:36 crc kubenswrapper[4933]: E1201 09:35:36.188525 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.188541 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 01 09:35:36 crc kubenswrapper[4933]: E1201 09:35:36.188552 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.188559 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.188696 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.188707 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.188717 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.188727 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.188737 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.188965 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.285335 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.285734 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.285870 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.285946 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.286019 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.316035 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjpdf" event={"ID":"447e5be4-e974-45c7-a58a-2efddd4bd49c","Type":"ContainerDied","Data":"62cc5fd668b5c7f32f62c30623ea820158d9da8866368d789c685705bfe28549"} Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.316115 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pjpdf" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.316167 4933 scope.go:117] "RemoveContainer" containerID="3b27e14d234acf537deda29ad786b4a1bbb4ae18bd7442d0e64e56bec38b4960" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.317520 4933 status_manager.go:851] "Failed to get status for pod" podUID="447e5be4-e974-45c7-a58a-2efddd4bd49c" pod="openshift-marketplace/redhat-operators-pjpdf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pjpdf\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.318080 4933 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.355047 4933 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.356781 4933 status_manager.go:851] "Failed to get status for pod" podUID="447e5be4-e974-45c7-a58a-2efddd4bd49c" pod="openshift-marketplace/redhat-operators-pjpdf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pjpdf\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.358205 4933 scope.go:117] "RemoveContainer" containerID="31eb8744d5282dc0122a6c4b8b086dd7b462ac11867fe060cd7b4f43244864d4" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.387127 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.387209 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.387243 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.387280 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.387319 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.387288 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.387385 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.387389 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.387343 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.387357 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.387559 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.387608 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.387817 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.393853 4933 scope.go:117] "RemoveContainer" containerID="5110bfbf6564a5fcf28396a6f0b1c7603f7022d80bce6355980e28b6b89d99f3" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.488673 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.488800 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.488826 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.488851 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.488903 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:35:36 crc kubenswrapper[4933]: I1201 09:35:36.488944 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:35:37 crc kubenswrapper[4933]: I1201 09:35:37.322074 4933 generic.go:334] "Generic (PLEG): container finished" podID="89266d5c-bcba-413a-9ab0-0901d25cd528" containerID="fb3560c3c6dfb3d649eec5c0409e20dedce6d9ca9c94e6e4e24fec340e96fe76" exitCode=0 Dec 01 09:35:37 crc kubenswrapper[4933]: I1201 09:35:37.322176 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"89266d5c-bcba-413a-9ab0-0901d25cd528","Type":"ContainerDied","Data":"fb3560c3c6dfb3d649eec5c0409e20dedce6d9ca9c94e6e4e24fec340e96fe76"} Dec 01 09:35:37 crc kubenswrapper[4933]: I1201 09:35:37.323973 4933 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:37 crc kubenswrapper[4933]: I1201 09:35:37.324472 4933 status_manager.go:851] "Failed to get status for pod" podUID="447e5be4-e974-45c7-a58a-2efddd4bd49c" 
pod="openshift-marketplace/redhat-operators-pjpdf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pjpdf\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:37 crc kubenswrapper[4933]: I1201 09:35:37.324782 4933 status_manager.go:851] "Failed to get status for pod" podUID="89266d5c-bcba-413a-9ab0-0901d25cd528" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:37 crc kubenswrapper[4933]: I1201 09:35:37.325527 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 01 09:35:37 crc kubenswrapper[4933]: I1201 09:35:37.327001 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 01 09:35:37 crc kubenswrapper[4933]: I1201 09:35:37.327833 4933 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5" exitCode=0 Dec 01 09:35:37 crc kubenswrapper[4933]: I1201 09:35:37.327868 4933 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382" exitCode=0 Dec 01 09:35:37 crc kubenswrapper[4933]: I1201 09:35:37.327879 4933 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901" exitCode=0 Dec 01 09:35:37 crc kubenswrapper[4933]: I1201 09:35:37.327890 4933 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5" exitCode=2 Dec 01 09:35:37 crc kubenswrapper[4933]: I1201 09:35:37.328013 4933 scope.go:117] "RemoveContainer" containerID="c3bc29321f659a97403015568d230e872dcb6c1c4835252b175658f929ceccba" Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.342871 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.656239 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.657228 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.657946 4933 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.658198 4933 status_manager.go:851] "Failed to get status for pod" podUID="447e5be4-e974-45c7-a58a-2efddd4bd49c" pod="openshift-marketplace/redhat-operators-pjpdf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pjpdf\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.658688 4933 status_manager.go:851] "Failed to get status for pod" podUID="89266d5c-bcba-413a-9ab0-0901d25cd528" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.659241 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.659829 4933 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.660274 4933 status_manager.go:851] "Failed to get status for pod" podUID="89266d5c-bcba-413a-9ab0-0901d25cd528" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.660531 4933 status_manager.go:851] "Failed to get status for pod" podUID="447e5be4-e974-45c7-a58a-2efddd4bd49c" pod="openshift-marketplace/redhat-operators-pjpdf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pjpdf\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.821297 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.821414 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89266d5c-bcba-413a-9ab0-0901d25cd528-kubelet-dir\") pod \"89266d5c-bcba-413a-9ab0-0901d25cd528\" (UID: \"89266d5c-bcba-413a-9ab0-0901d25cd528\") " Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.821437 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod 
\"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.821470 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89266d5c-bcba-413a-9ab0-0901d25cd528-var-lock\") pod \"89266d5c-bcba-413a-9ab0-0901d25cd528\" (UID: \"89266d5c-bcba-413a-9ab0-0901d25cd528\") " Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.821490 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.821531 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89266d5c-bcba-413a-9ab0-0901d25cd528-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "89266d5c-bcba-413a-9ab0-0901d25cd528" (UID: "89266d5c-bcba-413a-9ab0-0901d25cd528"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.821570 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89266d5c-bcba-413a-9ab0-0901d25cd528-var-lock" (OuterVolumeSpecName: "var-lock") pod "89266d5c-bcba-413a-9ab0-0901d25cd528" (UID: "89266d5c-bcba-413a-9ab0-0901d25cd528"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.821584 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.821545 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.821562 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.821666 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89266d5c-bcba-413a-9ab0-0901d25cd528-kube-api-access\") pod \"89266d5c-bcba-413a-9ab0-0901d25cd528\" (UID: \"89266d5c-bcba-413a-9ab0-0901d25cd528\") " Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.822217 4933 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89266d5c-bcba-413a-9ab0-0901d25cd528-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.822245 4933 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.822261 4933 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89266d5c-bcba-413a-9ab0-0901d25cd528-var-lock\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.822271 4933 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.822289 4933 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.828449 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89266d5c-bcba-413a-9ab0-0901d25cd528-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "89266d5c-bcba-413a-9ab0-0901d25cd528" (UID: "89266d5c-bcba-413a-9ab0-0901d25cd528"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:35:38 crc kubenswrapper[4933]: I1201 09:35:38.924431 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89266d5c-bcba-413a-9ab0-0901d25cd528-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.354838 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.355854 4933 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541" exitCode=0 Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.355952 4933 scope.go:117] "RemoveContainer" containerID="4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.356023 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.357473 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"89266d5c-bcba-413a-9ab0-0901d25cd528","Type":"ContainerDied","Data":"f7747b2a40c71bc17968b2cd641df1bfcd8c41b52ab50fd6fe342fed405a4b01"} Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.357522 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7747b2a40c71bc17968b2cd641df1bfcd8c41b52ab50fd6fe342fed405a4b01" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.357601 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.376545 4933 scope.go:117] "RemoveContainer" containerID="461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.376631 4933 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.377727 4933 status_manager.go:851] "Failed to get status for pod" podUID="447e5be4-e974-45c7-a58a-2efddd4bd49c" pod="openshift-marketplace/redhat-operators-pjpdf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pjpdf\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.378570 4933 status_manager.go:851] "Failed to get status for pod" podUID="89266d5c-bcba-413a-9ab0-0901d25cd528" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.379858 4933 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.380231 4933 status_manager.go:851] "Failed to get status for pod" podUID="89266d5c-bcba-413a-9ab0-0901d25cd528" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.380930 4933 status_manager.go:851] "Failed to get status for pod" podUID="447e5be4-e974-45c7-a58a-2efddd4bd49c" pod="openshift-marketplace/redhat-operators-pjpdf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pjpdf\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.393325 4933 scope.go:117] "RemoveContainer" containerID="559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.410148 4933 
scope.go:117] "RemoveContainer" containerID="0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.425603 4933 scope.go:117] "RemoveContainer" containerID="779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.444300 4933 scope.go:117] "RemoveContainer" containerID="9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.463558 4933 scope.go:117] "RemoveContainer" containerID="4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5" Dec 01 09:35:39 crc kubenswrapper[4933]: E1201 09:35:39.464084 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\": container with ID starting with 4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5 not found: ID does not exist" containerID="4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.464135 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5"} err="failed to get container status \"4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\": rpc error: code = NotFound desc = could not find container \"4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5\": container with ID starting with 4a80ca3576b1e5f93ff0669054f347ef0057b4221b422cbde2dafc315189dee5 not found: ID does not exist" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.464169 4933 scope.go:117] "RemoveContainer" containerID="461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382" Dec 01 09:35:39 crc kubenswrapper[4933]: E1201 09:35:39.464742 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\": container with ID starting with 461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382 not found: ID does not exist" containerID="461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.464797 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382"} err="failed to get container status \"461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\": rpc error: code = NotFound desc = could not find container \"461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382\": container with ID starting with 461ecd39437f20745493734c61dba3c49b4bfe574aac777fccb4a42e794e1382 not found: ID does not exist" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.464838 4933 scope.go:117] "RemoveContainer" containerID="559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901" Dec 01 09:35:39 crc kubenswrapper[4933]: E1201 09:35:39.465373 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\": container with ID starting with 559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901 not found: ID does not exist" 
containerID="559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.465403 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901"} err="failed to get container status \"559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\": rpc error: code = NotFound desc = could not find container \"559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901\": container with ID starting with 559e6c719c15d20c8f17c739589b94ce1d12c91b0557f3c7b76e9d9ba75fb901 not found: ID does not exist" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.465420 4933 scope.go:117] "RemoveContainer" containerID="0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5" Dec 01 09:35:39 crc kubenswrapper[4933]: E1201 09:35:39.465716 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\": container with ID starting with 0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5 not found: ID does not exist" containerID="0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.465739 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5"} err="failed to get container status \"0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\": rpc error: code = NotFound desc = could not find container \"0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5\": container with ID starting with 0db9e927e8b36cafd2b65325574c72733262576a775b8989f12fa99a0f0d56c5 not found: ID does not exist" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.465755 4933 scope.go:117] "RemoveContainer" containerID="779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541" Dec 01 09:35:39 crc kubenswrapper[4933]: E1201 09:35:39.466024 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\": container with ID starting with 779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541 not found: ID does not exist" containerID="779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.466047 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541"} err="failed to get container status \"779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\": rpc error: code = NotFound desc = could not find container \"779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541\": container with ID starting with 779e430f0e6e75ec09ccfa9601726936ee5acda9b58d25aa88e9e8b38edc9541 not found: ID does not exist" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.466059 4933 scope.go:117] "RemoveContainer" containerID="9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2" Dec 01 09:35:39 crc kubenswrapper[4933]: E1201 09:35:39.466452 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\": container with ID starting with 9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2 not found: ID does not exist" containerID="9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.466474 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2"} err="failed to get container status \"9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\": rpc error: code = NotFound desc = could not find container \"9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2\": container with ID starting with 9d37c3b56c4b009940ca661c8aa24ecec4bb8a1b60934c2228ed3e0e3628b3d2 not found: ID does not exist" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.671128 4933 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.671709 4933 status_manager.go:851] "Failed to get status for pod" podUID="447e5be4-e974-45c7-a58a-2efddd4bd49c" pod="openshift-marketplace/redhat-operators-pjpdf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pjpdf\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.672802 4933 status_manager.go:851] "Failed to get status for pod" podUID="89266d5c-bcba-413a-9ab0-0901d25cd528" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:39 crc kubenswrapper[4933]: I1201 09:35:39.676541 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 01 09:35:41 crc kubenswrapper[4933]: E1201 09:35:41.231940 4933 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.142:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 09:35:41 crc kubenswrapper[4933]: I1201 09:35:41.232832 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 09:35:41 crc kubenswrapper[4933]: W1201 09:35:41.254654 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-0b34b80d8b896ea3aa32e46b93c7011169581309cc4d4cc8956ff1ef56e2eb2b WatchSource:0}: Error finding container 0b34b80d8b896ea3aa32e46b93c7011169581309cc4d4cc8956ff1ef56e2eb2b: Status 404 returned error can't find the container with id 0b34b80d8b896ea3aa32e46b93c7011169581309cc4d4cc8956ff1ef56e2eb2b Dec 01 09:35:41 crc kubenswrapper[4933]: E1201 09:35:41.258038 4933 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.142:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187d0dbe98be7df3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-01 09:35:41.256670707 +0000 UTC m=+231.898394322,LastTimestamp:2025-12-01 09:35:41.256670707 +0000 UTC m=+231.898394322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 01 09:35:41 crc kubenswrapper[4933]: I1201 09:35:41.378368 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"0b34b80d8b896ea3aa32e46b93c7011169581309cc4d4cc8956ff1ef56e2eb2b"} Dec 01 09:35:42 crc kubenswrapper[4933]: I1201 09:35:42.387760 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d9444340de280aa97ab8f54e051d174e364348fc5a5e33ad0be488f2ce01cda7"} Dec 01 09:35:42 crc kubenswrapper[4933]: E1201 09:35:42.388917 4933 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.142:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 09:35:42 crc kubenswrapper[4933]: I1201 09:35:42.388920 4933 status_manager.go:851] "Failed to get status for pod" podUID="447e5be4-e974-45c7-a58a-2efddd4bd49c" pod="openshift-marketplace/redhat-operators-pjpdf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pjpdf\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:42 crc kubenswrapper[4933]: I1201 09:35:42.389281 4933 status_manager.go:851] "Failed to get status for pod" podUID="89266d5c-bcba-413a-9ab0-0901d25cd528" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:42 crc kubenswrapper[4933]: E1201 09:35:42.579737 4933 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:42 crc kubenswrapper[4933]: E1201 09:35:42.580338 4933 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:42 crc kubenswrapper[4933]: E1201 09:35:42.580743 4933 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:42 crc kubenswrapper[4933]: E1201 09:35:42.581014 4933 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:42 crc kubenswrapper[4933]: E1201 09:35:42.581306 4933 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:42 crc kubenswrapper[4933]: I1201 09:35:42.581363 4933 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 01 09:35:42 crc kubenswrapper[4933]: E1201 09:35:42.581637 4933 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="200ms" Dec 01 09:35:42 crc kubenswrapper[4933]: E1201 09:35:42.782503 4933 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="400ms" Dec 01 09:35:43 crc kubenswrapper[4933]: E1201 09:35:43.183904 4933 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="800ms" Dec 01 09:35:43 crc kubenswrapper[4933]: E1201 09:35:43.394127 4933 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.142:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 09:35:43 crc kubenswrapper[4933]: E1201 09:35:43.985140 4933 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="1.6s" Dec 01 09:35:45 crc kubenswrapper[4933]: E1201 
09:35:45.587241 4933 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="3.2s" Dec 01 09:35:46 crc kubenswrapper[4933]: E1201 09:35:46.281547 4933 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.142:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187d0dbe98be7df3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-01 09:35:41.256670707 +0000 UTC m=+231.898394322,LastTimestamp:2025-12-01 09:35:41.256670707 +0000 UTC m=+231.898394322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 01 09:35:48 crc kubenswrapper[4933]: E1201 09:35:48.788208 4933 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="6.4s" Dec 01 09:35:49 crc kubenswrapper[4933]: I1201 09:35:49.671403 4933 status_manager.go:851] "Failed to get status for pod" podUID="447e5be4-e974-45c7-a58a-2efddd4bd49c" pod="openshift-marketplace/redhat-operators-pjpdf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pjpdf\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:49 crc kubenswrapper[4933]: I1201 09:35:49.672047 4933 status_manager.go:851] "Failed to get status for pod" podUID="89266d5c-bcba-413a-9ab0-0901d25cd528" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:50 crc kubenswrapper[4933]: I1201 09:35:50.667486 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:35:50 crc kubenswrapper[4933]: I1201 09:35:50.669159 4933 status_manager.go:851] "Failed to get status for pod" podUID="447e5be4-e974-45c7-a58a-2efddd4bd49c" pod="openshift-marketplace/redhat-operators-pjpdf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pjpdf\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:50 crc kubenswrapper[4933]: I1201 09:35:50.669738 4933 status_manager.go:851] "Failed to get status for pod" podUID="89266d5c-bcba-413a-9ab0-0901d25cd528" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:50 crc kubenswrapper[4933]: I1201 09:35:50.686603 4933 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a680ea2b-148f-406d-9d17-4a5a953cbe5b" Dec 01 09:35:50 crc kubenswrapper[4933]: I1201 09:35:50.686662 4933 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a680ea2b-148f-406d-9d17-4a5a953cbe5b" Dec 01 09:35:50 crc kubenswrapper[4933]: E1201 09:35:50.687461 4933 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:35:50 crc kubenswrapper[4933]: I1201 09:35:50.688172 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:35:50 crc kubenswrapper[4933]: W1201 09:35:50.709374 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-7729e9621a299efd3381eb055fbe06c3a75e05cbc86ad2630242a71176a4d69b WatchSource:0}: Error finding container 7729e9621a299efd3381eb055fbe06c3a75e05cbc86ad2630242a71176a4d69b: Status 404 returned error can't find the container with id 7729e9621a299efd3381eb055fbe06c3a75e05cbc86ad2630242a71176a4d69b Dec 01 09:35:51 crc kubenswrapper[4933]: I1201 09:35:51.444450 4933 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="b59a22a2e3d65f7d41f425cdcd540350a0495bd20643313286c33c16076fc9c5" exitCode=0 Dec 01 09:35:51 crc kubenswrapper[4933]: I1201 09:35:51.444577 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"b59a22a2e3d65f7d41f425cdcd540350a0495bd20643313286c33c16076fc9c5"} Dec 01 09:35:51 crc kubenswrapper[4933]: I1201 09:35:51.444689 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7729e9621a299efd3381eb055fbe06c3a75e05cbc86ad2630242a71176a4d69b"} Dec 01 09:35:51 crc kubenswrapper[4933]: I1201 09:35:51.445240 4933 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a680ea2b-148f-406d-9d17-4a5a953cbe5b" Dec 01 09:35:51 crc kubenswrapper[4933]: I1201 09:35:51.445263 4933 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a680ea2b-148f-406d-9d17-4a5a953cbe5b" Dec 01 09:35:51 crc kubenswrapper[4933]: E1201 09:35:51.445999 4933 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:35:51 crc kubenswrapper[4933]: I1201 09:35:51.446079 4933 status_manager.go:851] "Failed to get status for pod" podUID="89266d5c-bcba-413a-9ab0-0901d25cd528" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:51 crc kubenswrapper[4933]: I1201 09:35:51.446429 4933 status_manager.go:851] "Failed to get status for pod" podUID="447e5be4-e974-45c7-a58a-2efddd4bd49c" pod="openshift-marketplace/redhat-operators-pjpdf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pjpdf\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:51 crc kubenswrapper[4933]: I1201 09:35:51.448253 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 01 09:35:51 crc kubenswrapper[4933]: I1201 09:35:51.448330 4933 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66" exitCode=1 Dec 01 09:35:51 crc kubenswrapper[4933]: I1201 09:35:51.448372 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66"} Dec 01 09:35:51 crc kubenswrapper[4933]: I1201 09:35:51.449061 4933 scope.go:117] "RemoveContainer" containerID="a4cd0f71559dba655a78b3ab2537fd4346eafa48352d1422a221bfc46f9aaf66" Dec 01 09:35:51 crc kubenswrapper[4933]: I1201 09:35:51.449855 4933 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:51 crc kubenswrapper[4933]: I1201 09:35:51.450397 4933 status_manager.go:851] "Failed to get status for pod" podUID="447e5be4-e974-45c7-a58a-2efddd4bd49c" pod="openshift-marketplace/redhat-operators-pjpdf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pjpdf\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:51 crc kubenswrapper[4933]: I1201 09:35:51.450744 4933 status_manager.go:851] "Failed to get status for pod" podUID="89266d5c-bcba-413a-9ab0-0901d25cd528" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Dec 01 09:35:51 crc kubenswrapper[4933]: E1201 09:35:51.704325 4933 desired_state_of_world_populator.go:312] "Error 
processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.142:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" volumeName="registry-storage" Dec 01 09:35:52 crc kubenswrapper[4933]: I1201 09:35:52.459197 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7021b0f8c7e59e2577b8f7aa80b84365c8e3d496e040e9904457e1a130ddb99a"} Dec 01 09:35:52 crc kubenswrapper[4933]: I1201 09:35:52.459877 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fe193d80390e73e3f674a42df15f9f39b20a356f4b777ec4f2d0c9774954a94d"} Dec 01 09:35:52 crc kubenswrapper[4933]: I1201 09:35:52.459893 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"62b3589d603702bf89ae35e98810d21ae9fd0b23ee1e7d99f9cef3ef98c6993e"} Dec 01 09:35:52 crc kubenswrapper[4933]: I1201 09:35:52.463143 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 01 09:35:52 crc kubenswrapper[4933]: I1201 09:35:52.463240 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2d393d128dc42d82a3ec69301a6ea58085d1a8e0e1384c729438c040f3238452"} Dec 01 09:35:53 crc kubenswrapper[4933]: I1201 09:35:53.472895 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"285b8c951597961caabb2af105aad80fb8999e75d45b0e55bb3800199f004d4b"} Dec 01 09:35:53 crc kubenswrapper[4933]: I1201 09:35:53.472949 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ad7d3bf49352b801596ff8a49a3e63cb261f5056b21e023c3bf41e754a886f14"} Dec 01 09:35:53 crc kubenswrapper[4933]: I1201 09:35:53.473193 4933 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a680ea2b-148f-406d-9d17-4a5a953cbe5b" Dec 01 09:35:53 crc kubenswrapper[4933]: I1201 09:35:53.473207 4933 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a680ea2b-148f-406d-9d17-4a5a953cbe5b" Dec 01 09:35:53 crc kubenswrapper[4933]: I1201 09:35:53.473421 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:35:54 crc kubenswrapper[4933]: I1201 09:35:54.236160 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 09:35:54 crc kubenswrapper[4933]: I1201 09:35:54.250676 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 09:35:54 crc kubenswrapper[4933]: I1201 09:35:54.479164 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 09:35:55 crc kubenswrapper[4933]: I1201 09:35:55.689111 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:35:55 crc kubenswrapper[4933]: I1201 09:35:55.689244 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:35:55 crc kubenswrapper[4933]: I1201 09:35:55.696572 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:35:58 crc kubenswrapper[4933]: I1201 09:35:58.485090 4933 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:35:58 crc kubenswrapper[4933]: I1201 09:35:58.501097 4933 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a680ea2b-148f-406d-9d17-4a5a953cbe5b" Dec 01 09:35:58 crc kubenswrapper[4933]: I1201 09:35:58.501131 4933 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a680ea2b-148f-406d-9d17-4a5a953cbe5b" Dec 01 09:35:58 crc kubenswrapper[4933]: I1201 09:35:58.504974 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:35:59 crc kubenswrapper[4933]: I1201 09:35:59.505552 4933 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a680ea2b-148f-406d-9d17-4a5a953cbe5b" Dec 01 09:35:59 crc kubenswrapper[4933]: I1201 09:35:59.505581 4933 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a680ea2b-148f-406d-9d17-4a5a953cbe5b" Dec 01 09:35:59 crc kubenswrapper[4933]: I1201 09:35:59.680839 4933 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d69eb31c-8304-4217-aee9-5d639e2ec120" Dec 01 09:36:08 crc kubenswrapper[4933]: I1201 09:36:08.120773 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 01 09:36:08 crc kubenswrapper[4933]: I1201 09:36:08.537219 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 01 09:36:08 crc kubenswrapper[4933]: I1201 09:36:08.553613 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 01 09:36:08 crc kubenswrapper[4933]: I1201 09:36:08.638780 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 01 09:36:09 crc kubenswrapper[4933]: I1201 09:36:09.156144 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 01 09:36:09 crc kubenswrapper[4933]: I1201 09:36:09.194048 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 01 09:36:09 crc kubenswrapper[4933]: I1201 09:36:09.267615 4933 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 01 09:36:09 crc kubenswrapper[4933]: I1201 09:36:09.271963 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 01 09:36:09 crc kubenswrapper[4933]: I1201 09:36:09.317084 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 01 09:36:09 crc kubenswrapper[4933]: I1201 09:36:09.403870 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 01 09:36:09 crc kubenswrapper[4933]: I1201 09:36:09.658814 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 01 09:36:09 crc kubenswrapper[4933]: I1201 09:36:09.772971 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 01 09:36:09 crc kubenswrapper[4933]: I1201 09:36:09.852425 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 01 09:36:10 crc kubenswrapper[4933]: I1201 09:36:10.138684 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 01 09:36:10 crc kubenswrapper[4933]: I1201 09:36:10.159458 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 01 09:36:10 crc kubenswrapper[4933]: I1201 09:36:10.193415 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 01 09:36:10 crc kubenswrapper[4933]: I1201 09:36:10.226017 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 09:36:10 crc kubenswrapper[4933]: I1201 09:36:10.568768 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 01 09:36:10 crc kubenswrapper[4933]: I1201 09:36:10.654576 4933 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 01 09:36:10 crc kubenswrapper[4933]: I1201 09:36:10.701525 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 01 09:36:10 crc kubenswrapper[4933]: I1201 09:36:10.920212 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 01 09:36:11 crc kubenswrapper[4933]: I1201 09:36:11.076952 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 01 09:36:11 crc kubenswrapper[4933]: I1201 09:36:11.121716 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 01 09:36:11 crc kubenswrapper[4933]: I1201 09:36:11.257381 4933 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 01 09:36:11 crc kubenswrapper[4933]: I1201 09:36:11.263542 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/redhat-operators-pjpdf"] Dec 01 09:36:11 crc 
kubenswrapper[4933]: I1201 09:36:11.263650 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 01 09:36:11 crc kubenswrapper[4933]: I1201 09:36:11.264832 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 01 09:36:11 crc kubenswrapper[4933]: I1201 09:36:11.272114 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 09:36:11 crc kubenswrapper[4933]: I1201 09:36:11.288130 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.288110084 podStartE2EDuration="13.288110084s" podCreationTimestamp="2025-12-01 09:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:36:11.286890964 +0000 UTC m=+261.928614579" watchObservedRunningTime="2025-12-01 09:36:11.288110084 +0000 UTC m=+261.929833699" Dec 01 09:36:11 crc kubenswrapper[4933]: I1201 09:36:11.294590 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 01 09:36:11 crc kubenswrapper[4933]: I1201 09:36:11.372690 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 01 09:36:11 crc kubenswrapper[4933]: I1201 09:36:11.412564 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 01 09:36:11 crc kubenswrapper[4933]: I1201 09:36:11.439775 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 01 09:36:11 crc kubenswrapper[4933]: I1201 09:36:11.608392 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 01 09:36:11 crc kubenswrapper[4933]: I1201 09:36:11.618425 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 01 09:36:11 crc kubenswrapper[4933]: I1201 09:36:11.659609 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 01 09:36:11 crc kubenswrapper[4933]: I1201 09:36:11.676248 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="447e5be4-e974-45c7-a58a-2efddd4bd49c" path="/var/lib/kubelet/pods/447e5be4-e974-45c7-a58a-2efddd4bd49c/volumes" Dec 01 09:36:11 crc kubenswrapper[4933]: I1201 09:36:11.930447 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 01 09:36:11 crc kubenswrapper[4933]: I1201 09:36:11.989237 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 01 09:36:12 crc kubenswrapper[4933]: I1201 09:36:12.001955 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 01 09:36:12 crc kubenswrapper[4933]: I1201 09:36:12.004821 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 01 09:36:12 crc kubenswrapper[4933]: I1201 09:36:12.006208 4933 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"machine-api-operator-images" Dec 01 09:36:12 crc kubenswrapper[4933]: I1201 09:36:12.066493 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 01 09:36:12 crc kubenswrapper[4933]: I1201 09:36:12.111183 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 01 09:36:12 crc kubenswrapper[4933]: I1201 09:36:12.200735 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 01 09:36:12 crc kubenswrapper[4933]: I1201 09:36:12.228853 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 01 09:36:12 crc kubenswrapper[4933]: I1201 09:36:12.270794 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 01 09:36:12 crc kubenswrapper[4933]: I1201 09:36:12.280199 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 01 09:36:12 crc kubenswrapper[4933]: I1201 09:36:12.287018 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 01 09:36:12 crc kubenswrapper[4933]: I1201 09:36:12.324481 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 01 09:36:12 crc kubenswrapper[4933]: I1201 09:36:12.331005 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 01 09:36:12 crc kubenswrapper[4933]: I1201 09:36:12.359040 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 01 09:36:12 crc kubenswrapper[4933]: I1201 09:36:12.407332 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 01 09:36:12 crc kubenswrapper[4933]: I1201 09:36:12.535357 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 01 09:36:12 crc kubenswrapper[4933]: I1201 09:36:12.599159 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 01 09:36:12 crc kubenswrapper[4933]: I1201 09:36:12.602566 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 01 09:36:12 crc kubenswrapper[4933]: I1201 09:36:12.803044 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 01 09:36:12 crc kubenswrapper[4933]: I1201 09:36:12.830721 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 01 09:36:12 crc kubenswrapper[4933]: I1201 09:36:12.914952 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 01 09:36:12 crc kubenswrapper[4933]: I1201 09:36:12.915246 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 01 09:36:12 crc kubenswrapper[4933]: I1201 09:36:12.957765 4933 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 01 09:36:13 crc kubenswrapper[4933]: I1201 09:36:13.025109 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 01 09:36:13 crc kubenswrapper[4933]: I1201 09:36:13.104295 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 01 09:36:13 crc kubenswrapper[4933]: I1201 09:36:13.135869 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 01 09:36:13 crc kubenswrapper[4933]: I1201 09:36:13.264263 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 01 09:36:13 crc kubenswrapper[4933]: I1201 09:36:13.287295 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 01 09:36:13 crc kubenswrapper[4933]: I1201 09:36:13.512972 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 01 09:36:13 crc kubenswrapper[4933]: I1201 09:36:13.587780 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 01 09:36:13 crc kubenswrapper[4933]: I1201 09:36:13.699638 4933 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 01 09:36:13 crc kubenswrapper[4933]: I1201 09:36:13.709147 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 01 09:36:13 crc kubenswrapper[4933]: I1201 09:36:13.719290 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 01 09:36:13 crc kubenswrapper[4933]: I1201 09:36:13.747867 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 01 09:36:13 crc kubenswrapper[4933]: I1201 09:36:13.820920 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 01 09:36:13 crc kubenswrapper[4933]: I1201 09:36:13.891685 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 01 09:36:13 crc kubenswrapper[4933]: I1201 09:36:13.906887 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 01 09:36:13 crc kubenswrapper[4933]: I1201 09:36:13.907941 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 01 09:36:13 crc kubenswrapper[4933]: I1201 09:36:13.925358 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 01 09:36:13 crc kubenswrapper[4933]: I1201 09:36:13.927732 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 01 09:36:13 crc kubenswrapper[4933]: I1201 09:36:13.928189 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 01 09:36:13 crc kubenswrapper[4933]: I1201 09:36:13.940714 
4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 01 09:36:13 crc kubenswrapper[4933]: I1201 09:36:13.971234 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.040270 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.079810 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.081215 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.224381 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.252902 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.258194 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.295865 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.347511 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.363104 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.390436 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.410188 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.438482 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.457942 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.463461 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.463521 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.553433 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.563728 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 
09:36:14.669174 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.685807 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.699497 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.703353 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.724870 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.752560 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.812926 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.855363 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.872347 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.895902 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 01 09:36:14 crc kubenswrapper[4933]: I1201 09:36:14.974065 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 01 09:36:15 crc kubenswrapper[4933]: I1201 09:36:15.140716 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 01 09:36:15 crc kubenswrapper[4933]: I1201 09:36:15.142588 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 01 09:36:15 crc kubenswrapper[4933]: I1201 09:36:15.197561 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 01 09:36:15 crc kubenswrapper[4933]: I1201 09:36:15.238634 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 01 09:36:15 crc kubenswrapper[4933]: I1201 09:36:15.243913 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 01 09:36:15 crc kubenswrapper[4933]: I1201 09:36:15.344544 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 01 09:36:15 crc kubenswrapper[4933]: I1201 09:36:15.387677 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 01 09:36:15 crc kubenswrapper[4933]: I1201 09:36:15.392333 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 01 09:36:15 
crc kubenswrapper[4933]: I1201 09:36:15.444899 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 01 09:36:15 crc kubenswrapper[4933]: I1201 09:36:15.457169 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 01 09:36:15 crc kubenswrapper[4933]: I1201 09:36:15.470372 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 01 09:36:15 crc kubenswrapper[4933]: I1201 09:36:15.547879 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 01 09:36:15 crc kubenswrapper[4933]: I1201 09:36:15.645898 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 01 09:36:15 crc kubenswrapper[4933]: I1201 09:36:15.731497 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 01 09:36:15 crc kubenswrapper[4933]: I1201 09:36:15.768156 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 01 09:36:15 crc kubenswrapper[4933]: I1201 09:36:15.909555 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 01 09:36:15 crc kubenswrapper[4933]: I1201 09:36:15.940706 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 01 09:36:15 crc kubenswrapper[4933]: I1201 09:36:15.953721 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 01 09:36:15 crc kubenswrapper[4933]: I1201 09:36:15.988592 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 01 09:36:15 crc kubenswrapper[4933]: I1201 09:36:15.988595 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 01 09:36:15 crc kubenswrapper[4933]: I1201 09:36:15.989262 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 01 09:36:16 crc kubenswrapper[4933]: I1201 09:36:16.046800 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 01 09:36:16 crc kubenswrapper[4933]: I1201 09:36:16.056834 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 01 09:36:16 crc kubenswrapper[4933]: I1201 09:36:16.094414 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 01 09:36:16 crc kubenswrapper[4933]: I1201 09:36:16.111058 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 01 09:36:16 crc kubenswrapper[4933]: I1201 09:36:16.124800 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 01 09:36:16 crc kubenswrapper[4933]: I1201 09:36:16.309084 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 01 
09:36:16 crc kubenswrapper[4933]: I1201 09:36:16.333015 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 01 09:36:16 crc kubenswrapper[4933]: I1201 09:36:16.352979 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 01 09:36:16 crc kubenswrapper[4933]: I1201 09:36:16.366615 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 01 09:36:16 crc kubenswrapper[4933]: I1201 09:36:16.588147 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 01 09:36:16 crc kubenswrapper[4933]: I1201 09:36:16.743634 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 01 09:36:16 crc kubenswrapper[4933]: I1201 09:36:16.753002 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 01 09:36:16 crc kubenswrapper[4933]: I1201 09:36:16.777672 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 01 09:36:16 crc kubenswrapper[4933]: I1201 09:36:16.838254 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 01 09:36:16 crc kubenswrapper[4933]: I1201 09:36:16.866575 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 01 09:36:16 crc kubenswrapper[4933]: I1201 09:36:16.950762 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 01 09:36:16 crc kubenswrapper[4933]: I1201 09:36:16.991641 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 01 09:36:17 crc kubenswrapper[4933]: I1201 09:36:17.002166 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 01 09:36:17 crc kubenswrapper[4933]: I1201 09:36:17.116695 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 01 09:36:17 crc kubenswrapper[4933]: I1201 09:36:17.163780 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 01 09:36:17 crc kubenswrapper[4933]: I1201 09:36:17.333004 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 01 09:36:17 crc kubenswrapper[4933]: I1201 09:36:17.333662 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 01 09:36:17 crc kubenswrapper[4933]: I1201 09:36:17.345757 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 01 09:36:17 crc kubenswrapper[4933]: I1201 09:36:17.408936 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 01 09:36:17 crc kubenswrapper[4933]: I1201 09:36:17.532106 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 
01 09:36:17 crc kubenswrapper[4933]: I1201 09:36:17.642819 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 01 09:36:17 crc kubenswrapper[4933]: I1201 09:36:17.662986 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 01 09:36:17 crc kubenswrapper[4933]: I1201 09:36:17.678681 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 01 09:36:17 crc kubenswrapper[4933]: I1201 09:36:17.705284 4933 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 01 09:36:17 crc kubenswrapper[4933]: I1201 09:36:17.794783 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 01 09:36:17 crc kubenswrapper[4933]: I1201 09:36:17.866297 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 01 09:36:17 crc kubenswrapper[4933]: I1201 09:36:17.915489 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 01 09:36:17 crc kubenswrapper[4933]: I1201 09:36:17.916722 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 01 09:36:17 crc kubenswrapper[4933]: I1201 09:36:17.966072 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 01 09:36:18 crc kubenswrapper[4933]: I1201 09:36:18.018561 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 01 09:36:18 crc kubenswrapper[4933]: I1201 09:36:18.042615 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 01 09:36:18 crc kubenswrapper[4933]: I1201 09:36:18.047275 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 01 09:36:18 crc kubenswrapper[4933]: I1201 09:36:18.073483 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 01 09:36:18 crc kubenswrapper[4933]: I1201 09:36:18.080596 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 01 09:36:18 crc kubenswrapper[4933]: I1201 09:36:18.089328 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 01 09:36:18 crc kubenswrapper[4933]: I1201 09:36:18.136603 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 01 09:36:18 crc kubenswrapper[4933]: I1201 09:36:18.239610 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 01 09:36:18 crc kubenswrapper[4933]: I1201 09:36:18.259575 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 01 09:36:18 crc kubenswrapper[4933]: I1201 09:36:18.309631 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 01 09:36:18 crc kubenswrapper[4933]: I1201 
09:36:18.361517 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 01 09:36:18 crc kubenswrapper[4933]: I1201 09:36:18.472263 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 01 09:36:18 crc kubenswrapper[4933]: I1201 09:36:18.562432 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 01 09:36:18 crc kubenswrapper[4933]: I1201 09:36:18.658083 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 01 09:36:18 crc kubenswrapper[4933]: I1201 09:36:18.665037 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 01 09:36:18 crc kubenswrapper[4933]: I1201 09:36:18.712228 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 01 09:36:18 crc kubenswrapper[4933]: I1201 09:36:18.806404 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 01 09:36:18 crc kubenswrapper[4933]: I1201 09:36:18.892068 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 01 09:36:18 crc kubenswrapper[4933]: I1201 09:36:18.928861 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 01 09:36:18 crc kubenswrapper[4933]: I1201 09:36:18.989238 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 01 09:36:19 crc kubenswrapper[4933]: I1201 09:36:19.136447 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 01 09:36:19 crc kubenswrapper[4933]: I1201 09:36:19.176563 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 01 09:36:19 crc kubenswrapper[4933]: I1201 09:36:19.381891 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 01 09:36:19 crc kubenswrapper[4933]: I1201 09:36:19.476442 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 01 09:36:19 crc kubenswrapper[4933]: I1201 09:36:19.626832 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 01 09:36:19 crc kubenswrapper[4933]: I1201 09:36:19.641970 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 01 09:36:19 crc kubenswrapper[4933]: I1201 09:36:19.667242 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 01 09:36:19 crc kubenswrapper[4933]: I1201 09:36:19.678982 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 01 09:36:19 crc kubenswrapper[4933]: I1201 09:36:19.707068 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 01 09:36:19 crc kubenswrapper[4933]: I1201 09:36:19.724298 
4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 01 09:36:19 crc kubenswrapper[4933]: I1201 09:36:19.726328 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 01 09:36:19 crc kubenswrapper[4933]: I1201 09:36:19.764270 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 01 09:36:19 crc kubenswrapper[4933]: I1201 09:36:19.837258 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 01 09:36:19 crc kubenswrapper[4933]: I1201 09:36:19.848338 4933 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 01 09:36:19 crc kubenswrapper[4933]: I1201 09:36:19.855194 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 01 09:36:19 crc kubenswrapper[4933]: I1201 09:36:19.960105 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 01 09:36:19 crc kubenswrapper[4933]: I1201 09:36:19.984022 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 01 09:36:20 crc kubenswrapper[4933]: I1201 09:36:20.219190 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 01 09:36:20 crc kubenswrapper[4933]: I1201 09:36:20.219663 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 01 09:36:20 crc kubenswrapper[4933]: I1201 09:36:20.254376 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 01 09:36:20 crc kubenswrapper[4933]: I1201 09:36:20.255721 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 01 09:36:20 crc kubenswrapper[4933]: I1201 09:36:20.275511 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 01 09:36:20 crc kubenswrapper[4933]: I1201 09:36:20.355677 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 01 09:36:20 crc kubenswrapper[4933]: I1201 09:36:20.359829 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 01 09:36:20 crc kubenswrapper[4933]: I1201 09:36:20.374828 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 01 09:36:20 crc kubenswrapper[4933]: I1201 09:36:20.401586 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 01 09:36:20 crc kubenswrapper[4933]: I1201 09:36:20.567618 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 01 09:36:20 crc kubenswrapper[4933]: I1201 09:36:20.585818 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 01 09:36:20 crc kubenswrapper[4933]: I1201 09:36:20.600113 4933 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 01 09:36:20 crc kubenswrapper[4933]: I1201 09:36:20.707140 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-q5ch5" podUID="64c1704c-f0a1-4401-bc2d-46febb3ba534" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 09:36:20 crc kubenswrapper[4933]: I1201 09:36:20.714028 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 01 09:36:20 crc kubenswrapper[4933]: I1201 09:36:20.825530 4933 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 01 09:36:20 crc kubenswrapper[4933]: I1201 09:36:20.857158 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 01 09:36:21 crc kubenswrapper[4933]: E1201 09:36:21.459075 4933 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.792s" Dec 01 09:36:21 crc kubenswrapper[4933]: I1201 09:36:21.511720 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 01 09:36:21 crc kubenswrapper[4933]: I1201 09:36:21.511894 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 01 09:36:21 crc kubenswrapper[4933]: I1201 09:36:21.512133 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 01 09:36:21 crc kubenswrapper[4933]: I1201 09:36:21.543177 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 01 09:36:21 crc kubenswrapper[4933]: I1201 09:36:21.551208 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 01 09:36:21 crc kubenswrapper[4933]: I1201 09:36:21.555762 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 01 09:36:21 crc kubenswrapper[4933]: I1201 09:36:21.555851 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 01 09:36:21 crc kubenswrapper[4933]: I1201 09:36:21.558689 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 01 09:36:21 crc kubenswrapper[4933]: I1201 09:36:21.568395 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 01 09:36:21 crc kubenswrapper[4933]: I1201 09:36:21.587848 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 01 09:36:21 crc kubenswrapper[4933]: I1201 09:36:21.605200 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 01 09:36:21 crc kubenswrapper[4933]: I1201 09:36:21.615277 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 01 09:36:21 crc kubenswrapper[4933]: I1201 09:36:21.658414 4933 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 01 09:36:21 crc kubenswrapper[4933]: I1201 09:36:21.736072 4933 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 01 09:36:21 crc kubenswrapper[4933]: I1201 09:36:21.736409 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://d9444340de280aa97ab8f54e051d174e364348fc5a5e33ad0be488f2ce01cda7" gracePeriod=5 Dec 01 09:36:21 crc kubenswrapper[4933]: I1201 09:36:21.930104 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 01 09:36:21 crc kubenswrapper[4933]: I1201 09:36:21.969109 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 01 09:36:22 crc kubenswrapper[4933]: I1201 09:36:22.049857 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 01 09:36:22 crc kubenswrapper[4933]: I1201 09:36:22.084404 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 01 09:36:22 crc kubenswrapper[4933]: I1201 09:36:22.244568 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 01 09:36:22 crc kubenswrapper[4933]: I1201 09:36:22.437521 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 01 09:36:22 crc kubenswrapper[4933]: I1201 09:36:22.565239 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 01 09:36:22 crc kubenswrapper[4933]: I1201 09:36:22.699613 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 01 09:36:22 crc kubenswrapper[4933]: I1201 09:36:22.700002 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 01 09:36:22 crc kubenswrapper[4933]: I1201 09:36:22.719095 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 01 09:36:22 crc kubenswrapper[4933]: I1201 09:36:22.776860 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 01 09:36:22 crc kubenswrapper[4933]: I1201 09:36:22.940293 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 01 09:36:22 crc kubenswrapper[4933]: I1201 09:36:22.941898 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 01 09:36:22 crc kubenswrapper[4933]: I1201 09:36:22.998833 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 01 09:36:23 crc kubenswrapper[4933]: I1201 09:36:23.173083 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 01 09:36:23 crc kubenswrapper[4933]: I1201 09:36:23.240555 4933 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 01 09:36:23 crc kubenswrapper[4933]: I1201 09:36:23.365086 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 01 09:36:23 crc kubenswrapper[4933]: I1201 09:36:23.375072 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 01 09:36:23 crc kubenswrapper[4933]: I1201 09:36:23.395617 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 01 09:36:23 crc kubenswrapper[4933]: I1201 09:36:23.517928 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 01 09:36:23 crc kubenswrapper[4933]: I1201 09:36:23.548430 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 01 09:36:23 crc kubenswrapper[4933]: I1201 09:36:23.645007 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 01 09:36:23 crc kubenswrapper[4933]: I1201 09:36:23.707624 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 01 09:36:23 crc kubenswrapper[4933]: I1201 09:36:23.773704 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 01 09:36:23 crc kubenswrapper[4933]: I1201 09:36:23.942826 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 01 09:36:24 crc kubenswrapper[4933]: I1201 09:36:24.154624 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 01 09:36:24 crc kubenswrapper[4933]: I1201 09:36:24.607996 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 01 09:36:24 crc kubenswrapper[4933]: I1201 09:36:24.764628 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 01 09:36:27 crc kubenswrapper[4933]: I1201 09:36:27.324933 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 01 09:36:27 crc kubenswrapper[4933]: I1201 09:36:27.325606 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 09:36:27 crc kubenswrapper[4933]: I1201 09:36:27.358755 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 01 09:36:27 crc kubenswrapper[4933]: I1201 09:36:27.359041 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 01 09:36:27 crc kubenswrapper[4933]: I1201 09:36:27.359153 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 01 09:36:27 crc kubenswrapper[4933]: I1201 09:36:27.358890 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:36:27 crc kubenswrapper[4933]: I1201 09:36:27.359089 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:36:27 crc kubenswrapper[4933]: I1201 09:36:27.359231 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:36:27 crc kubenswrapper[4933]: I1201 09:36:27.359242 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 01 09:36:27 crc kubenswrapper[4933]: I1201 09:36:27.359297 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 01 09:36:27 crc kubenswrapper[4933]: I1201 09:36:27.359494 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:36:27 crc kubenswrapper[4933]: I1201 09:36:27.359736 4933 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 01 09:36:27 crc kubenswrapper[4933]: I1201 09:36:27.359757 4933 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 01 09:36:27 crc kubenswrapper[4933]: I1201 09:36:27.359769 4933 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 01 09:36:27 crc kubenswrapper[4933]: I1201 09:36:27.359780 4933 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 01 09:36:27 crc kubenswrapper[4933]: I1201 09:36:27.368719 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:36:27 crc kubenswrapper[4933]: I1201 09:36:27.461061 4933 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 01 09:36:27 crc kubenswrapper[4933]: I1201 09:36:27.506866 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 01 09:36:27 crc kubenswrapper[4933]: I1201 09:36:27.506921 4933 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="d9444340de280aa97ab8f54e051d174e364348fc5a5e33ad0be488f2ce01cda7" exitCode=137 Dec 01 09:36:27 crc kubenswrapper[4933]: I1201 09:36:27.506973 4933 scope.go:117] "RemoveContainer" containerID="d9444340de280aa97ab8f54e051d174e364348fc5a5e33ad0be488f2ce01cda7" Dec 01 09:36:27 crc kubenswrapper[4933]: I1201 09:36:27.507026 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 09:36:27 crc kubenswrapper[4933]: I1201 09:36:27.526989 4933 scope.go:117] "RemoveContainer" containerID="d9444340de280aa97ab8f54e051d174e364348fc5a5e33ad0be488f2ce01cda7" Dec 01 09:36:27 crc kubenswrapper[4933]: E1201 09:36:27.527491 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9444340de280aa97ab8f54e051d174e364348fc5a5e33ad0be488f2ce01cda7\": container with ID starting with d9444340de280aa97ab8f54e051d174e364348fc5a5e33ad0be488f2ce01cda7 not found: ID does not exist" containerID="d9444340de280aa97ab8f54e051d174e364348fc5a5e33ad0be488f2ce01cda7" Dec 01 09:36:27 crc kubenswrapper[4933]: I1201 09:36:27.527538 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9444340de280aa97ab8f54e051d174e364348fc5a5e33ad0be488f2ce01cda7"} err="failed to get container status \"d9444340de280aa97ab8f54e051d174e364348fc5a5e33ad0be488f2ce01cda7\": rpc error: code = NotFound desc = could not find container \"d9444340de280aa97ab8f54e051d174e364348fc5a5e33ad0be488f2ce01cda7\": container with ID starting with d9444340de280aa97ab8f54e051d174e364348fc5a5e33ad0be488f2ce01cda7 not found: ID does not exist" Dec 01 09:36:27 crc kubenswrapper[4933]: I1201 09:36:27.675018 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.022510 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5tk2j"] Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.022940 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5tk2j" podUID="194f9dd3-85db-4303-ad0e-180d0e160da0" containerName="registry-server" containerID="cri-o://13043f0c841f4858206d8bb94cbb585ac1d5ef962e31458340d3e3e186f4562c" gracePeriod=30 Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.036706 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jlnld"] Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.037099 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jlnld" podUID="5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b" containerName="registry-server" containerID="cri-o://a06650e0c6400dee16635ef2ef4d9951fe8df256bb514cab4ec7ab5c8fbdd08f" gracePeriod=30 Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.038205 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zj2bn"] Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.038734 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-zj2bn" podUID="51766f22-0ddf-4f2e-bbbd-059431d6ef4e" containerName="marketplace-operator" containerID="cri-o://4da124f76afb921651a0cc3d7b12bbda7c8b8019705c877125b1e8c462af52a8" gracePeriod=30 Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.060626 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngb5q"] Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.060951 4933 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-ngb5q" podUID="a08a8024-ebc2-4e05-a6a0-ebc22bed8658" containerName="registry-server" containerID="cri-o://7a55f0b45fc14128779cf55d2704208252d808b5ca2211f84c1bd95f7f05e565" gracePeriod=30 Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.067348 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b8gph"] Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.067687 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b8gph" podUID="c1f2e651-74da-4f9c-9294-c2d45830b676" containerName="registry-server" containerID="cri-o://a52375e49236b7456d0bd44a3f2c4c74f30915d50e0bb5fb5a13cea383ce4c5c" gracePeriod=30 Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.080341 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8w4rb"] Dec 01 09:36:28 crc kubenswrapper[4933]: E1201 09:36:28.080870 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.080887 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 01 09:36:28 crc kubenswrapper[4933]: E1201 09:36:28.080904 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89266d5c-bcba-413a-9ab0-0901d25cd528" containerName="installer" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.080911 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="89266d5c-bcba-413a-9ab0-0901d25cd528" containerName="installer" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.081148 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="89266d5c-bcba-413a-9ab0-0901d25cd528" containerName="installer" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.081165 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.081639 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8w4rb" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.092628 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8w4rb"] Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.171435 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-848jv\" (UniqueName: \"kubernetes.io/projected/972d2150-cea0-4c55-9be0-bc7022d630e2-kube-api-access-848jv\") pod \"marketplace-operator-79b997595-8w4rb\" (UID: \"972d2150-cea0-4c55-9be0-bc7022d630e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-8w4rb" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.171624 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/972d2150-cea0-4c55-9be0-bc7022d630e2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8w4rb\" (UID: \"972d2150-cea0-4c55-9be0-bc7022d630e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-8w4rb" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.171653 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/972d2150-cea0-4c55-9be0-bc7022d630e2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8w4rb\" (UID: \"972d2150-cea0-4c55-9be0-bc7022d630e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-8w4rb" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.273551 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-848jv\" (UniqueName: \"kubernetes.io/projected/972d2150-cea0-4c55-9be0-bc7022d630e2-kube-api-access-848jv\") pod \"marketplace-operator-79b997595-8w4rb\" (UID: \"972d2150-cea0-4c55-9be0-bc7022d630e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-8w4rb" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.273678 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/972d2150-cea0-4c55-9be0-bc7022d630e2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8w4rb\" (UID: \"972d2150-cea0-4c55-9be0-bc7022d630e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-8w4rb" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.273708 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/972d2150-cea0-4c55-9be0-bc7022d630e2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8w4rb\" (UID: \"972d2150-cea0-4c55-9be0-bc7022d630e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-8w4rb" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.277523 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/972d2150-cea0-4c55-9be0-bc7022d630e2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8w4rb\" (UID: \"972d2150-cea0-4c55-9be0-bc7022d630e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-8w4rb" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.280863 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/972d2150-cea0-4c55-9be0-bc7022d630e2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8w4rb\" (UID: \"972d2150-cea0-4c55-9be0-bc7022d630e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-8w4rb" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.292856 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-848jv\" (UniqueName: \"kubernetes.io/projected/972d2150-cea0-4c55-9be0-bc7022d630e2-kube-api-access-848jv\") pod \"marketplace-operator-79b997595-8w4rb\" (UID: \"972d2150-cea0-4c55-9be0-bc7022d630e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-8w4rb" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.400201 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8w4rb" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.513910 4933 generic.go:334] "Generic (PLEG): container finished" podID="5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b" containerID="a06650e0c6400dee16635ef2ef4d9951fe8df256bb514cab4ec7ab5c8fbdd08f" exitCode=0 Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.513990 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jlnld" event={"ID":"5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b","Type":"ContainerDied","Data":"a06650e0c6400dee16635ef2ef4d9951fe8df256bb514cab4ec7ab5c8fbdd08f"} Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.518609 4933 generic.go:334] "Generic (PLEG): container finished" podID="194f9dd3-85db-4303-ad0e-180d0e160da0" containerID="13043f0c841f4858206d8bb94cbb585ac1d5ef962e31458340d3e3e186f4562c" exitCode=0 Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.518699 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5tk2j" event={"ID":"194f9dd3-85db-4303-ad0e-180d0e160da0","Type":"ContainerDied","Data":"13043f0c841f4858206d8bb94cbb585ac1d5ef962e31458340d3e3e186f4562c"} Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.535093 4933 generic.go:334] "Generic (PLEG): container finished" podID="c1f2e651-74da-4f9c-9294-c2d45830b676" containerID="a52375e49236b7456d0bd44a3f2c4c74f30915d50e0bb5fb5a13cea383ce4c5c" exitCode=0 Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.535167 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8gph" event={"ID":"c1f2e651-74da-4f9c-9294-c2d45830b676","Type":"ContainerDied","Data":"a52375e49236b7456d0bd44a3f2c4c74f30915d50e0bb5fb5a13cea383ce4c5c"} Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.535213 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8gph" event={"ID":"c1f2e651-74da-4f9c-9294-c2d45830b676","Type":"ContainerDied","Data":"5d7bd0bef41b31e215b5852af435027a745e1d216ed6aa77c691f96dfcbed963"} Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.535227 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d7bd0bef41b31e215b5852af435027a745e1d216ed6aa77c691f96dfcbed963" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.541028 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zj2bn" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.544339 4933 generic.go:334] "Generic (PLEG): container finished" podID="51766f22-0ddf-4f2e-bbbd-059431d6ef4e" containerID="4da124f76afb921651a0cc3d7b12bbda7c8b8019705c877125b1e8c462af52a8" exitCode=0 Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.544398 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zj2bn" event={"ID":"51766f22-0ddf-4f2e-bbbd-059431d6ef4e","Type":"ContainerDied","Data":"4da124f76afb921651a0cc3d7b12bbda7c8b8019705c877125b1e8c462af52a8"} Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.544433 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zj2bn" event={"ID":"51766f22-0ddf-4f2e-bbbd-059431d6ef4e","Type":"ContainerDied","Data":"77a81dc9ae151d0c3aa2e668a7bfc68b16dca58e6b2bd656822b91699ff537d8"} Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.544454 4933 scope.go:117] "RemoveContainer" containerID="4da124f76afb921651a0cc3d7b12bbda7c8b8019705c877125b1e8c462af52a8" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.548718 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b8gph" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.548784 4933 generic.go:334] "Generic (PLEG): container finished" podID="a08a8024-ebc2-4e05-a6a0-ebc22bed8658" containerID="7a55f0b45fc14128779cf55d2704208252d808b5ca2211f84c1bd95f7f05e565" exitCode=0 Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.548797 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngb5q" event={"ID":"a08a8024-ebc2-4e05-a6a0-ebc22bed8658","Type":"ContainerDied","Data":"7a55f0b45fc14128779cf55d2704208252d808b5ca2211f84c1bd95f7f05e565"} Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.548819 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngb5q" event={"ID":"a08a8024-ebc2-4e05-a6a0-ebc22bed8658","Type":"ContainerDied","Data":"c10022b0cde993bc5fcca9f7d4efbbeceac024c182ec3faeffc6d4a7a60550c7"} Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.548829 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c10022b0cde993bc5fcca9f7d4efbbeceac024c182ec3faeffc6d4a7a60550c7" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.550610 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngb5q" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.585032 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdttm\" (UniqueName: \"kubernetes.io/projected/a08a8024-ebc2-4e05-a6a0-ebc22bed8658-kube-api-access-rdttm\") pod \"a08a8024-ebc2-4e05-a6a0-ebc22bed8658\" (UID: \"a08a8024-ebc2-4e05-a6a0-ebc22bed8658\") " Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.585070 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1f2e651-74da-4f9c-9294-c2d45830b676-catalog-content\") pod \"c1f2e651-74da-4f9c-9294-c2d45830b676\" (UID: \"c1f2e651-74da-4f9c-9294-c2d45830b676\") " Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.585147 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a08a8024-ebc2-4e05-a6a0-ebc22bed8658-catalog-content\") pod \"a08a8024-ebc2-4e05-a6a0-ebc22bed8658\" (UID: \"a08a8024-ebc2-4e05-a6a0-ebc22bed8658\") " Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.585183 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmtsv\" (UniqueName: \"kubernetes.io/projected/51766f22-0ddf-4f2e-bbbd-059431d6ef4e-kube-api-access-jmtsv\") pod \"51766f22-0ddf-4f2e-bbbd-059431d6ef4e\" (UID: \"51766f22-0ddf-4f2e-bbbd-059431d6ef4e\") " Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.585217 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51766f22-0ddf-4f2e-bbbd-059431d6ef4e-marketplace-trusted-ca\") pod \"51766f22-0ddf-4f2e-bbbd-059431d6ef4e\" (UID: \"51766f22-0ddf-4f2e-bbbd-059431d6ef4e\") " Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.585244 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a08a8024-ebc2-4e05-a6a0-ebc22bed8658-utilities\") pod \"a08a8024-ebc2-4e05-a6a0-ebc22bed8658\" (UID: \"a08a8024-ebc2-4e05-a6a0-ebc22bed8658\") " Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.585314 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjx5q\" (UniqueName: \"kubernetes.io/projected/c1f2e651-74da-4f9c-9294-c2d45830b676-kube-api-access-sjx5q\") pod \"c1f2e651-74da-4f9c-9294-c2d45830b676\" (UID: \"c1f2e651-74da-4f9c-9294-c2d45830b676\") " Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.585344 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/51766f22-0ddf-4f2e-bbbd-059431d6ef4e-marketplace-operator-metrics\") pod \"51766f22-0ddf-4f2e-bbbd-059431d6ef4e\" (UID: \"51766f22-0ddf-4f2e-bbbd-059431d6ef4e\") " Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.585367 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1f2e651-74da-4f9c-9294-c2d45830b676-utilities\") pod \"c1f2e651-74da-4f9c-9294-c2d45830b676\" (UID: \"c1f2e651-74da-4f9c-9294-c2d45830b676\") " Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.585166 4933 scope.go:117] "RemoveContainer" containerID="4da124f76afb921651a0cc3d7b12bbda7c8b8019705c877125b1e8c462af52a8" Dec 01 09:36:28 crc kubenswrapper[4933]: 
I1201 09:36:28.587523 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1f2e651-74da-4f9c-9294-c2d45830b676-utilities" (OuterVolumeSpecName: "utilities") pod "c1f2e651-74da-4f9c-9294-c2d45830b676" (UID: "c1f2e651-74da-4f9c-9294-c2d45830b676"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.588016 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51766f22-0ddf-4f2e-bbbd-059431d6ef4e-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "51766f22-0ddf-4f2e-bbbd-059431d6ef4e" (UID: "51766f22-0ddf-4f2e-bbbd-059431d6ef4e"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:36:28 crc kubenswrapper[4933]: E1201 09:36:28.596440 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4da124f76afb921651a0cc3d7b12bbda7c8b8019705c877125b1e8c462af52a8\": container with ID starting with 4da124f76afb921651a0cc3d7b12bbda7c8b8019705c877125b1e8c462af52a8 not found: ID does not exist" containerID="4da124f76afb921651a0cc3d7b12bbda7c8b8019705c877125b1e8c462af52a8" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.596507 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4da124f76afb921651a0cc3d7b12bbda7c8b8019705c877125b1e8c462af52a8"} err="failed to get container status \"4da124f76afb921651a0cc3d7b12bbda7c8b8019705c877125b1e8c462af52a8\": rpc error: code = NotFound desc = could not find container \"4da124f76afb921651a0cc3d7b12bbda7c8b8019705c877125b1e8c462af52a8\": container with ID starting with 4da124f76afb921651a0cc3d7b12bbda7c8b8019705c877125b1e8c462af52a8 not found: ID does not exist" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.597731 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a08a8024-ebc2-4e05-a6a0-ebc22bed8658-utilities" (OuterVolumeSpecName: "utilities") pod "a08a8024-ebc2-4e05-a6a0-ebc22bed8658" (UID: "a08a8024-ebc2-4e05-a6a0-ebc22bed8658"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.598154 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51766f22-0ddf-4f2e-bbbd-059431d6ef4e-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "51766f22-0ddf-4f2e-bbbd-059431d6ef4e" (UID: "51766f22-0ddf-4f2e-bbbd-059431d6ef4e"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.599278 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51766f22-0ddf-4f2e-bbbd-059431d6ef4e-kube-api-access-jmtsv" (OuterVolumeSpecName: "kube-api-access-jmtsv") pod "51766f22-0ddf-4f2e-bbbd-059431d6ef4e" (UID: "51766f22-0ddf-4f2e-bbbd-059431d6ef4e"). InnerVolumeSpecName "kube-api-access-jmtsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.601139 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a08a8024-ebc2-4e05-a6a0-ebc22bed8658-kube-api-access-rdttm" (OuterVolumeSpecName: "kube-api-access-rdttm") pod "a08a8024-ebc2-4e05-a6a0-ebc22bed8658" (UID: "a08a8024-ebc2-4e05-a6a0-ebc22bed8658"). InnerVolumeSpecName "kube-api-access-rdttm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.602564 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1f2e651-74da-4f9c-9294-c2d45830b676-kube-api-access-sjx5q" (OuterVolumeSpecName: "kube-api-access-sjx5q") pod "c1f2e651-74da-4f9c-9294-c2d45830b676" (UID: "c1f2e651-74da-4f9c-9294-c2d45830b676"). InnerVolumeSpecName "kube-api-access-sjx5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.603659 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8w4rb"] Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.610163 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a08a8024-ebc2-4e05-a6a0-ebc22bed8658-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a08a8024-ebc2-4e05-a6a0-ebc22bed8658" (UID: "a08a8024-ebc2-4e05-a6a0-ebc22bed8658"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.687058 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmtsv\" (UniqueName: \"kubernetes.io/projected/51766f22-0ddf-4f2e-bbbd-059431d6ef4e-kube-api-access-jmtsv\") on node \"crc\" DevicePath \"\"" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.687110 4933 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51766f22-0ddf-4f2e-bbbd-059431d6ef4e-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.687147 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a08a8024-ebc2-4e05-a6a0-ebc22bed8658-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.687159 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjx5q\" (UniqueName: \"kubernetes.io/projected/c1f2e651-74da-4f9c-9294-c2d45830b676-kube-api-access-sjx5q\") on node \"crc\" DevicePath \"\"" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.687169 4933 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/51766f22-0ddf-4f2e-bbbd-059431d6ef4e-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.687180 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1f2e651-74da-4f9c-9294-c2d45830b676-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.687190 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdttm\" (UniqueName: \"kubernetes.io/projected/a08a8024-ebc2-4e05-a6a0-ebc22bed8658-kube-api-access-rdttm\") on node \"crc\" DevicePath \"\"" Dec 
01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.687199 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a08a8024-ebc2-4e05-a6a0-ebc22bed8658-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.723024 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1f2e651-74da-4f9c-9294-c2d45830b676-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1f2e651-74da-4f9c-9294-c2d45830b676" (UID: "c1f2e651-74da-4f9c-9294-c2d45830b676"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.788564 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1f2e651-74da-4f9c-9294-c2d45830b676-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.906268 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5tk2j"
Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.947636 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jlnld"
Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.991753 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b-catalog-content\") pod \"5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b\" (UID: \"5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b\") "
Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.991837 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhrfh\" (UniqueName: \"kubernetes.io/projected/194f9dd3-85db-4303-ad0e-180d0e160da0-kube-api-access-mhrfh\") pod \"194f9dd3-85db-4303-ad0e-180d0e160da0\" (UID: \"194f9dd3-85db-4303-ad0e-180d0e160da0\") "
Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.991870 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b-utilities\") pod \"5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b\" (UID: \"5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b\") "
Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.991914 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/194f9dd3-85db-4303-ad0e-180d0e160da0-utilities\") pod \"194f9dd3-85db-4303-ad0e-180d0e160da0\" (UID: \"194f9dd3-85db-4303-ad0e-180d0e160da0\") "
Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.991957 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/194f9dd3-85db-4303-ad0e-180d0e160da0-catalog-content\") pod \"194f9dd3-85db-4303-ad0e-180d0e160da0\" (UID: \"194f9dd3-85db-4303-ad0e-180d0e160da0\") "
Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.992022 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6jsd\" (UniqueName: \"kubernetes.io/projected/5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b-kube-api-access-b6jsd\") pod \"5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b\" (UID: \"5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b\") "
Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.993953 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/194f9dd3-85db-4303-ad0e-180d0e160da0-utilities" (OuterVolumeSpecName: "utilities") pod "194f9dd3-85db-4303-ad0e-180d0e160da0" (UID: "194f9dd3-85db-4303-ad0e-180d0e160da0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:36:28 crc kubenswrapper[4933]: I1201 09:36:28.995826 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/194f9dd3-85db-4303-ad0e-180d0e160da0-kube-api-access-mhrfh" (OuterVolumeSpecName: "kube-api-access-mhrfh") pod "194f9dd3-85db-4303-ad0e-180d0e160da0" (UID: "194f9dd3-85db-4303-ad0e-180d0e160da0"). InnerVolumeSpecName "kube-api-access-mhrfh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.003929 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b-utilities" (OuterVolumeSpecName: "utilities") pod "5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b" (UID: "5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.014209 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b-kube-api-access-b6jsd" (OuterVolumeSpecName: "kube-api-access-b6jsd") pod "5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b" (UID: "5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b"). InnerVolumeSpecName "kube-api-access-b6jsd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.066438 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/194f9dd3-85db-4303-ad0e-180d0e160da0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "194f9dd3-85db-4303-ad0e-180d0e160da0" (UID: "194f9dd3-85db-4303-ad0e-180d0e160da0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.072430 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b" (UID: "5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.094669 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhrfh\" (UniqueName: \"kubernetes.io/projected/194f9dd3-85db-4303-ad0e-180d0e160da0-kube-api-access-mhrfh\") on node \"crc\" DevicePath \"\""
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.094711 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.094722 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/194f9dd3-85db-4303-ad0e-180d0e160da0-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.094730 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/194f9dd3-85db-4303-ad0e-180d0e160da0-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.094741 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6jsd\" (UniqueName: \"kubernetes.io/projected/5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b-kube-api-access-b6jsd\") on node \"crc\" DevicePath \"\""
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.094750 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.559905 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jlnld" event={"ID":"5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b","Type":"ContainerDied","Data":"fbe35d73c32857d78690449b0c4ae03bf6a1fbee86ae7bfbc866af8734818a70"}
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.559999 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jlnld"
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.560318 4933 scope.go:117] "RemoveContainer" containerID="a06650e0c6400dee16635ef2ef4d9951fe8df256bb514cab4ec7ab5c8fbdd08f"
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.564116 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5tk2j" event={"ID":"194f9dd3-85db-4303-ad0e-180d0e160da0","Type":"ContainerDied","Data":"b331849fd4912bc02f801efd3e1ed42ca1d4fb163f4b69428de8f9b9e6ef8b91"}
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.564971 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5tk2j"
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.566074 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8w4rb" event={"ID":"972d2150-cea0-4c55-9be0-bc7022d630e2","Type":"ContainerStarted","Data":"0de1ebc6cb9e764567997cc93d8bd010f7ce1aaecc2f0067ebaea1c7f71a8ad4"}
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.566195 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8w4rb" event={"ID":"972d2150-cea0-4c55-9be0-bc7022d630e2","Type":"ContainerStarted","Data":"7f7567ef50a40237c5fdda6c49f12edfe035f14eea94f2686e698fb3cf92d1db"}
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.566623 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8w4rb"
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.567487 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zj2bn"
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.567576 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b8gph"
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.567494 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngb5q"
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.572014 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-8w4rb"
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.577984 4933 scope.go:117] "RemoveContainer" containerID="fd216f784c8281309bc3581b4b956d1a6b4f5be0a13c449ef0eb24b037bed0a0"
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.604581 4933 scope.go:117] "RemoveContainer" containerID="652ee23bd10b140c3198be615bab0552422ca4d67eed9fdd462dff08a6a4f836"
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.606019 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-8w4rb" podStartSLOduration=1.605959276 podStartE2EDuration="1.605959276s" podCreationTimestamp="2025-12-01 09:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:36:29.597875737 +0000 UTC m=+280.239599352" watchObservedRunningTime="2025-12-01 09:36:29.605959276 +0000 UTC m=+280.247682891"
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.628963 4933 scope.go:117] "RemoveContainer" containerID="13043f0c841f4858206d8bb94cbb585ac1d5ef962e31458340d3e3e186f4562c"
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.645403 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngb5q"]
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.649912 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngb5q"]
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.661670 4933 scope.go:117] "RemoveContainer" containerID="8cfa5361032f1b7da47612bd389b2f08fb0b2da04cfc7e9437b29b2ced6e55f1"
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.663767 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zj2bn"]
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.690986 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a08a8024-ebc2-4e05-a6a0-ebc22bed8658" path="/var/lib/kubelet/pods/a08a8024-ebc2-4e05-a6a0-ebc22bed8658/volumes"
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.695563 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zj2bn"]
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.695627 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5tk2j"]
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.695648 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5tk2j"]
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.695665 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b8gph"]
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.699648 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b8gph"]
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.701619 4933 scope.go:117] "RemoveContainer" containerID="981446bc66b3b20b29e680484a2d60bbbe1f91523a43ddc2790ee63826229ed5"
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.707372 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jlnld"]
Dec 01 09:36:29 crc kubenswrapper[4933]: I1201 09:36:29.711813 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jlnld"]
Dec 01 09:36:31 crc kubenswrapper[4933]: I1201 09:36:31.675143 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="194f9dd3-85db-4303-ad0e-180d0e160da0" path="/var/lib/kubelet/pods/194f9dd3-85db-4303-ad0e-180d0e160da0/volumes"
Dec 01 09:36:31 crc kubenswrapper[4933]: I1201 09:36:31.677186 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51766f22-0ddf-4f2e-bbbd-059431d6ef4e" path="/var/lib/kubelet/pods/51766f22-0ddf-4f2e-bbbd-059431d6ef4e/volumes"
Dec 01 09:36:31 crc kubenswrapper[4933]: I1201 09:36:31.677832 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b" path="/var/lib/kubelet/pods/5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b/volumes"
Dec 01 09:36:31 crc kubenswrapper[4933]: I1201 09:36:31.678614 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1f2e651-74da-4f9c-9294-c2d45830b676" path="/var/lib/kubelet/pods/c1f2e651-74da-4f9c-9294-c2d45830b676/volumes"
Dec 01 09:36:33 crc kubenswrapper[4933]: I1201 09:36:33.938941 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 01 09:36:58 crc kubenswrapper[4933]: I1201 09:36:58.847075 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v9qqn"]
Dec 01 09:36:58 crc kubenswrapper[4933]: I1201 09:36:58.847931 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn" podUID="da8f0888-39cd-4813-8f5b-ba725fb15ee5" containerName="controller-manager" containerID="cri-o://8ec58aba24a5ae8a5de56f4e433a69795807b013ff523c443b7604f9e100fcfe" gracePeriod=30
Dec 01 09:36:58 crc kubenswrapper[4933]: I1201 09:36:58.949285 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs"]
Dec 01 09:36:58 crc kubenswrapper[4933]: I1201 09:36:58.950059 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs" podUID="54960e89-4e49-4c21-bea4-cc46fcf8edba" containerName="route-controller-manager" containerID="cri-o://1559601dd6befb3002cb2dbdbe9dd92056f311f9b557a5befdb5fc36180fbfa4" gracePeriod=30
Dec 01 09:36:59 crc kubenswrapper[4933]: I1201 09:36:59.744670 4933 generic.go:334] "Generic (PLEG): container finished" podID="da8f0888-39cd-4813-8f5b-ba725fb15ee5" containerID="8ec58aba24a5ae8a5de56f4e433a69795807b013ff523c443b7604f9e100fcfe" exitCode=0
Dec 01 09:36:59 crc kubenswrapper[4933]: I1201 09:36:59.744804 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn" event={"ID":"da8f0888-39cd-4813-8f5b-ba725fb15ee5","Type":"ContainerDied","Data":"8ec58aba24a5ae8a5de56f4e433a69795807b013ff523c443b7604f9e100fcfe"}
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.392143 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn"
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.432099 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da8f0888-39cd-4813-8f5b-ba725fb15ee5-proxy-ca-bundles\") pod \"da8f0888-39cd-4813-8f5b-ba725fb15ee5\" (UID: \"da8f0888-39cd-4813-8f5b-ba725fb15ee5\") "
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.432169 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da8f0888-39cd-4813-8f5b-ba725fb15ee5-serving-cert\") pod \"da8f0888-39cd-4813-8f5b-ba725fb15ee5\" (UID: \"da8f0888-39cd-4813-8f5b-ba725fb15ee5\") "
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.432426 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da8f0888-39cd-4813-8f5b-ba725fb15ee5-config\") pod \"da8f0888-39cd-4813-8f5b-ba725fb15ee5\" (UID: \"da8f0888-39cd-4813-8f5b-ba725fb15ee5\") "
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.432469 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmk2b\" (UniqueName: \"kubernetes.io/projected/da8f0888-39cd-4813-8f5b-ba725fb15ee5-kube-api-access-kmk2b\") pod \"da8f0888-39cd-4813-8f5b-ba725fb15ee5\" (UID: \"da8f0888-39cd-4813-8f5b-ba725fb15ee5\") "
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.432531 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da8f0888-39cd-4813-8f5b-ba725fb15ee5-client-ca\") pod \"da8f0888-39cd-4813-8f5b-ba725fb15ee5\" (UID: \"da8f0888-39cd-4813-8f5b-ba725fb15ee5\") "
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.433670 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da8f0888-39cd-4813-8f5b-ba725fb15ee5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "da8f0888-39cd-4813-8f5b-ba725fb15ee5" (UID: "da8f0888-39cd-4813-8f5b-ba725fb15ee5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.433748 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da8f0888-39cd-4813-8f5b-ba725fb15ee5-config" (OuterVolumeSpecName: "config") pod "da8f0888-39cd-4813-8f5b-ba725fb15ee5" (UID: "da8f0888-39cd-4813-8f5b-ba725fb15ee5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.433837 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da8f0888-39cd-4813-8f5b-ba725fb15ee5-client-ca" (OuterVolumeSpecName: "client-ca") pod "da8f0888-39cd-4813-8f5b-ba725fb15ee5" (UID: "da8f0888-39cd-4813-8f5b-ba725fb15ee5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.445841 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da8f0888-39cd-4813-8f5b-ba725fb15ee5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "da8f0888-39cd-4813-8f5b-ba725fb15ee5" (UID: "da8f0888-39cd-4813-8f5b-ba725fb15ee5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.447428 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da8f0888-39cd-4813-8f5b-ba725fb15ee5-kube-api-access-kmk2b" (OuterVolumeSpecName: "kube-api-access-kmk2b") pod "da8f0888-39cd-4813-8f5b-ba725fb15ee5" (UID: "da8f0888-39cd-4813-8f5b-ba725fb15ee5"). InnerVolumeSpecName "kube-api-access-kmk2b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.532266 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs"
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.533601 4933 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da8f0888-39cd-4813-8f5b-ba725fb15ee5-client-ca\") on node \"crc\" DevicePath \"\""
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.533632 4933 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da8f0888-39cd-4813-8f5b-ba725fb15ee5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.533643 4933 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da8f0888-39cd-4813-8f5b-ba725fb15ee5-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.533653 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da8f0888-39cd-4813-8f5b-ba725fb15ee5-config\") on node \"crc\" DevicePath \"\""
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.533662 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmk2b\" (UniqueName: \"kubernetes.io/projected/da8f0888-39cd-4813-8f5b-ba725fb15ee5-kube-api-access-kmk2b\") on node \"crc\" DevicePath \"\""
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.635255 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54960e89-4e49-4c21-bea4-cc46fcf8edba-client-ca\") pod \"54960e89-4e49-4c21-bea4-cc46fcf8edba\" (UID: \"54960e89-4e49-4c21-bea4-cc46fcf8edba\") "
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.635469 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjpbt\" (UniqueName: \"kubernetes.io/projected/54960e89-4e49-4c21-bea4-cc46fcf8edba-kube-api-access-tjpbt\") pod \"54960e89-4e49-4c21-bea4-cc46fcf8edba\" (UID: \"54960e89-4e49-4c21-bea4-cc46fcf8edba\") "
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.635531 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54960e89-4e49-4c21-bea4-cc46fcf8edba-config\") pod \"54960e89-4e49-4c21-bea4-cc46fcf8edba\" (UID: \"54960e89-4e49-4c21-bea4-cc46fcf8edba\") "
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.635577 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54960e89-4e49-4c21-bea4-cc46fcf8edba-serving-cert\") pod \"54960e89-4e49-4c21-bea4-cc46fcf8edba\" (UID: \"54960e89-4e49-4c21-bea4-cc46fcf8edba\") "
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.636740 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54960e89-4e49-4c21-bea4-cc46fcf8edba-client-ca" (OuterVolumeSpecName: "client-ca") pod "54960e89-4e49-4c21-bea4-cc46fcf8edba" (UID: "54960e89-4e49-4c21-bea4-cc46fcf8edba"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.637032 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54960e89-4e49-4c21-bea4-cc46fcf8edba-config" (OuterVolumeSpecName: "config") pod "54960e89-4e49-4c21-bea4-cc46fcf8edba" (UID: "54960e89-4e49-4c21-bea4-cc46fcf8edba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.640050 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54960e89-4e49-4c21-bea4-cc46fcf8edba-kube-api-access-tjpbt" (OuterVolumeSpecName: "kube-api-access-tjpbt") pod "54960e89-4e49-4c21-bea4-cc46fcf8edba" (UID: "54960e89-4e49-4c21-bea4-cc46fcf8edba"). InnerVolumeSpecName "kube-api-access-tjpbt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.640375 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54960e89-4e49-4c21-bea4-cc46fcf8edba-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "54960e89-4e49-4c21-bea4-cc46fcf8edba" (UID: "54960e89-4e49-4c21-bea4-cc46fcf8edba"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.736848 4933 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54960e89-4e49-4c21-bea4-cc46fcf8edba-client-ca\") on node \"crc\" DevicePath \"\""
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.736885 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjpbt\" (UniqueName: \"kubernetes.io/projected/54960e89-4e49-4c21-bea4-cc46fcf8edba-kube-api-access-tjpbt\") on node \"crc\" DevicePath \"\""
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.736900 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54960e89-4e49-4c21-bea4-cc46fcf8edba-config\") on node \"crc\" DevicePath \"\""
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.736908 4933 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54960e89-4e49-4c21-bea4-cc46fcf8edba-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.751772 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn"
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.751778 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-v9qqn" event={"ID":"da8f0888-39cd-4813-8f5b-ba725fb15ee5","Type":"ContainerDied","Data":"f7571434d519bef1247f50af03322b4d116b15f32e17bccf6b84003c5cf155bc"}
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.751991 4933 scope.go:117] "RemoveContainer" containerID="8ec58aba24a5ae8a5de56f4e433a69795807b013ff523c443b7604f9e100fcfe"
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.753376 4933 generic.go:334] "Generic (PLEG): container finished" podID="54960e89-4e49-4c21-bea4-cc46fcf8edba" containerID="1559601dd6befb3002cb2dbdbe9dd92056f311f9b557a5befdb5fc36180fbfa4" exitCode=0
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.753434 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs"
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.753449 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs" event={"ID":"54960e89-4e49-4c21-bea4-cc46fcf8edba","Type":"ContainerDied","Data":"1559601dd6befb3002cb2dbdbe9dd92056f311f9b557a5befdb5fc36180fbfa4"}
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.753493 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs" event={"ID":"54960e89-4e49-4c21-bea4-cc46fcf8edba","Type":"ContainerDied","Data":"5f548bef4662c0b0b936093c9136f583950edcc8d1ca2902794c964b1fcc97e9"}
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.777977 4933 scope.go:117] "RemoveContainer" containerID="1559601dd6befb3002cb2dbdbe9dd92056f311f9b557a5befdb5fc36180fbfa4"
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.791461 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs"]
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.795515 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sr8zs"]
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.796167 4933 scope.go:117] "RemoveContainer" containerID="1559601dd6befb3002cb2dbdbe9dd92056f311f9b557a5befdb5fc36180fbfa4"
Dec 01 09:37:00 crc kubenswrapper[4933]: E1201 09:37:00.796967 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1559601dd6befb3002cb2dbdbe9dd92056f311f9b557a5befdb5fc36180fbfa4\": container with ID starting with 1559601dd6befb3002cb2dbdbe9dd92056f311f9b557a5befdb5fc36180fbfa4 not found: ID does not exist" containerID="1559601dd6befb3002cb2dbdbe9dd92056f311f9b557a5befdb5fc36180fbfa4"
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.797026 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1559601dd6befb3002cb2dbdbe9dd92056f311f9b557a5befdb5fc36180fbfa4"} err="failed to get container status \"1559601dd6befb3002cb2dbdbe9dd92056f311f9b557a5befdb5fc36180fbfa4\": rpc error: code = NotFound desc = could not find container \"1559601dd6befb3002cb2dbdbe9dd92056f311f9b557a5befdb5fc36180fbfa4\": container with ID starting with 1559601dd6befb3002cb2dbdbe9dd92056f311f9b557a5befdb5fc36180fbfa4 not found: ID does not exist"
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.804898 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v9qqn"]
Dec 01 09:37:00 crc kubenswrapper[4933]: I1201 09:37:00.808651 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v9qqn"]
Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.488968 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58"]
Dec 01 09:37:01 crc kubenswrapper[4933]: E1201 09:37:01.489374 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="194f9dd3-85db-4303-ad0e-180d0e160da0" containerName="registry-server"
Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.489392 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="194f9dd3-85db-4303-ad0e-180d0e160da0" containerName="registry-server"
Dec 01 09:37:01 crc kubenswrapper[4933]: E1201 09:37:01.489406 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a08a8024-ebc2-4e05-a6a0-ebc22bed8658" containerName="registry-server"
Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.489414 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="a08a8024-ebc2-4e05-a6a0-ebc22bed8658" containerName="registry-server"
Dec 01 09:37:01 crc kubenswrapper[4933]: E1201 09:37:01.489425 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8f0888-39cd-4813-8f5b-ba725fb15ee5" containerName="controller-manager"
Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.489433 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8f0888-39cd-4813-8f5b-ba725fb15ee5" containerName="controller-manager"
Dec 01 09:37:01 crc kubenswrapper[4933]: E1201 09:37:01.489442 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a08a8024-ebc2-4e05-a6a0-ebc22bed8658" containerName="extract-utilities"
Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.489448 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="a08a8024-ebc2-4e05-a6a0-ebc22bed8658" containerName="extract-utilities"
Dec 01 09:37:01 crc kubenswrapper[4933]: E1201 09:37:01.489458 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="194f9dd3-85db-4303-ad0e-180d0e160da0" containerName="extract-content"
Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.489468 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="194f9dd3-85db-4303-ad0e-180d0e160da0" containerName="extract-content"
Dec 01 09:37:01 crc kubenswrapper[4933]: E1201 09:37:01.489480 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b" containerName="extract-content"
Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.489486 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b" containerName="extract-content"
Dec 01 09:37:01 crc kubenswrapper[4933]: E1201 09:37:01.489493 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b" containerName="registry-server"
Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.489499 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b" containerName="registry-server"
Dec 01 09:37:01 crc kubenswrapper[4933]: E1201 09:37:01.489510 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1f2e651-74da-4f9c-9294-c2d45830b676" containerName="extract-content"
Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.489517 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1f2e651-74da-4f9c-9294-c2d45830b676" containerName="extract-content"
Dec 01 09:37:01 crc kubenswrapper[4933]: E1201 09:37:01.489525 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51766f22-0ddf-4f2e-bbbd-059431d6ef4e" containerName="marketplace-operator"
Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.489532 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="51766f22-0ddf-4f2e-bbbd-059431d6ef4e" containerName="marketplace-operator"
Dec 01 09:37:01 crc kubenswrapper[4933]: E1201 09:37:01.489540 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b" containerName="extract-utilities"
Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.489546 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b" containerName="extract-utilities"
Dec 01 09:37:01 crc kubenswrapper[4933]: E1201 09:37:01.489558 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54960e89-4e49-4c21-bea4-cc46fcf8edba" containerName="route-controller-manager"
Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.489565 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="54960e89-4e49-4c21-bea4-cc46fcf8edba" containerName="route-controller-manager"
Dec 01 09:37:01 crc kubenswrapper[4933]: E1201 09:37:01.489573 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1f2e651-74da-4f9c-9294-c2d45830b676" containerName="registry-server"
Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.489579 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1f2e651-74da-4f9c-9294-c2d45830b676" containerName="registry-server"
Dec 01 09:37:01 crc kubenswrapper[4933]: E1201 09:37:01.489588 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a08a8024-ebc2-4e05-a6a0-ebc22bed8658" containerName="extract-content"
Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.489594 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="a08a8024-ebc2-4e05-a6a0-ebc22bed8658" containerName="extract-content"
Dec 01 09:37:01 crc kubenswrapper[4933]: E1201 09:37:01.489601 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="194f9dd3-85db-4303-ad0e-180d0e160da0" containerName="extract-utilities"
Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.489608 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="194f9dd3-85db-4303-ad0e-180d0e160da0" containerName="extract-utilities"
Dec 01 09:37:01 crc kubenswrapper[4933]: E1201 09:37:01.489620 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1f2e651-74da-4f9c-9294-c2d45830b676" containerName="extract-utilities"
Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.489626 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1f2e651-74da-4f9c-9294-c2d45830b676" containerName="extract-utilities"
Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.489727 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="a08a8024-ebc2-4e05-a6a0-ebc22bed8658" containerName="registry-server"
Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.489742 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="51766f22-0ddf-4f2e-bbbd-059431d6ef4e" containerName="marketplace-operator"
Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.489753 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="194f9dd3-85db-4303-ad0e-180d0e160da0" containerName="registry-server"
Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.489763 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="54960e89-4e49-4c21-bea4-cc46fcf8edba" containerName="route-controller-manager"
Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.489770 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="5de2c46e-8ecd-4bb3-b68e-9dfd7357c66b" containerName="registry-server"
Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.489779 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1f2e651-74da-4f9c-9294-c2d45830b676" containerName="registry-server"
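
The cpu_manager.go:410 / state_mem.go:107 / memory_manager.go:354 burst above happens when a new pod is admitted: assignments left behind by pods that no longer exist are purged before resources are handed out again. A Go sketch of that purge over a per-(podUID, containerName) map; the structure is hypothetical, not the kubelet's actual state types:

    package main

    import "fmt"

    type key struct{ podUID, container string }

    // removeStaleState drops every assignment whose pod is no longer active,
    // mirroring the "RemoveStaleState: removing container" lines above.
    // Deleting from a map while ranging over it is safe in Go.
    func removeStaleState(assignments map[key]string, active map[string]bool) {
        for k := range assignments {
            if !active[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.container)
                delete(assignments, k)
            }
        }
    }

    func main() {
        a := map[key]string{
            {"194f9dd3-85db-4303-ad0e-180d0e160da0", "registry-server"}: "cpuset 0-1",
        }
        removeStaleState(a, map[string]bool{}) // no active pods keep the entry
    }
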
containerName="controller-manager" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.490347 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.493091 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh"] Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.495784 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.498235 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.498630 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.498767 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.498837 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.498940 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.498971 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.499128 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.500714 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.500924 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.501158 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.501446 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.502670 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.506681 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58"] Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.509255 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.509960 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh"] Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 
09:37:01.549057 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b5000d1-43e9-4fb9-ade4-81f02de8e6e2-client-ca\") pod \"route-controller-manager-75f7b94877-kh8bh\" (UID: \"8b5000d1-43e9-4fb9-ade4-81f02de8e6e2\") " pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.549185 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b5000d1-43e9-4fb9-ade4-81f02de8e6e2-config\") pod \"route-controller-manager-75f7b94877-kh8bh\" (UID: \"8b5000d1-43e9-4fb9-ade4-81f02de8e6e2\") " pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.549299 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zkvc\" (UniqueName: \"kubernetes.io/projected/fa4c6523-4be1-4749-be39-e9f960029709-kube-api-access-2zkvc\") pod \"controller-manager-5c8f4f45cc-wkf58\" (UID: \"fa4c6523-4be1-4749-be39-e9f960029709\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.549377 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b5000d1-43e9-4fb9-ade4-81f02de8e6e2-serving-cert\") pod \"route-controller-manager-75f7b94877-kh8bh\" (UID: \"8b5000d1-43e9-4fb9-ade4-81f02de8e6e2\") " pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.549459 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fa4c6523-4be1-4749-be39-e9f960029709-proxy-ca-bundles\") pod \"controller-manager-5c8f4f45cc-wkf58\" (UID: \"fa4c6523-4be1-4749-be39-e9f960029709\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.549487 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa4c6523-4be1-4749-be39-e9f960029709-serving-cert\") pod \"controller-manager-5c8f4f45cc-wkf58\" (UID: \"fa4c6523-4be1-4749-be39-e9f960029709\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.549509 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa4c6523-4be1-4749-be39-e9f960029709-client-ca\") pod \"controller-manager-5c8f4f45cc-wkf58\" (UID: \"fa4c6523-4be1-4749-be39-e9f960029709\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.549584 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqb8n\" (UniqueName: \"kubernetes.io/projected/8b5000d1-43e9-4fb9-ade4-81f02de8e6e2-kube-api-access-zqb8n\") pod \"route-controller-manager-75f7b94877-kh8bh\" (UID: \"8b5000d1-43e9-4fb9-ade4-81f02de8e6e2\") " pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh" Dec 01 09:37:01 crc 
kubenswrapper[4933]: I1201 09:37:01.549726 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa4c6523-4be1-4749-be39-e9f960029709-config\") pod \"controller-manager-5c8f4f45cc-wkf58\" (UID: \"fa4c6523-4be1-4749-be39-e9f960029709\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.651106 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa4c6523-4be1-4749-be39-e9f960029709-client-ca\") pod \"controller-manager-5c8f4f45cc-wkf58\" (UID: \"fa4c6523-4be1-4749-be39-e9f960029709\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.651182 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqb8n\" (UniqueName: \"kubernetes.io/projected/8b5000d1-43e9-4fb9-ade4-81f02de8e6e2-kube-api-access-zqb8n\") pod \"route-controller-manager-75f7b94877-kh8bh\" (UID: \"8b5000d1-43e9-4fb9-ade4-81f02de8e6e2\") " pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.651217 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa4c6523-4be1-4749-be39-e9f960029709-config\") pod \"controller-manager-5c8f4f45cc-wkf58\" (UID: \"fa4c6523-4be1-4749-be39-e9f960029709\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.651248 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b5000d1-43e9-4fb9-ade4-81f02de8e6e2-client-ca\") pod \"route-controller-manager-75f7b94877-kh8bh\" (UID: \"8b5000d1-43e9-4fb9-ade4-81f02de8e6e2\") " pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.651279 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b5000d1-43e9-4fb9-ade4-81f02de8e6e2-config\") pod \"route-controller-manager-75f7b94877-kh8bh\" (UID: \"8b5000d1-43e9-4fb9-ade4-81f02de8e6e2\") " pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.651324 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zkvc\" (UniqueName: \"kubernetes.io/projected/fa4c6523-4be1-4749-be39-e9f960029709-kube-api-access-2zkvc\") pod \"controller-manager-5c8f4f45cc-wkf58\" (UID: \"fa4c6523-4be1-4749-be39-e9f960029709\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.651343 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b5000d1-43e9-4fb9-ade4-81f02de8e6e2-serving-cert\") pod \"route-controller-manager-75f7b94877-kh8bh\" (UID: \"8b5000d1-43e9-4fb9-ade4-81f02de8e6e2\") " pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.651381 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fa4c6523-4be1-4749-be39-e9f960029709-proxy-ca-bundles\") pod \"controller-manager-5c8f4f45cc-wkf58\" (UID: \"fa4c6523-4be1-4749-be39-e9f960029709\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.651399 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa4c6523-4be1-4749-be39-e9f960029709-serving-cert\") pod \"controller-manager-5c8f4f45cc-wkf58\" (UID: \"fa4c6523-4be1-4749-be39-e9f960029709\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.652562 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa4c6523-4be1-4749-be39-e9f960029709-client-ca\") pod \"controller-manager-5c8f4f45cc-wkf58\" (UID: \"fa4c6523-4be1-4749-be39-e9f960029709\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.653168 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa4c6523-4be1-4749-be39-e9f960029709-config\") pod \"controller-manager-5c8f4f45cc-wkf58\" (UID: \"fa4c6523-4be1-4749-be39-e9f960029709\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.653176 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fa4c6523-4be1-4749-be39-e9f960029709-proxy-ca-bundles\") pod \"controller-manager-5c8f4f45cc-wkf58\" (UID: \"fa4c6523-4be1-4749-be39-e9f960029709\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.653280 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b5000d1-43e9-4fb9-ade4-81f02de8e6e2-client-ca\") pod \"route-controller-manager-75f7b94877-kh8bh\" (UID: \"8b5000d1-43e9-4fb9-ade4-81f02de8e6e2\") " pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.653510 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b5000d1-43e9-4fb9-ade4-81f02de8e6e2-config\") pod \"route-controller-manager-75f7b94877-kh8bh\" (UID: \"8b5000d1-43e9-4fb9-ade4-81f02de8e6e2\") " pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.658081 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa4c6523-4be1-4749-be39-e9f960029709-serving-cert\") pod \"controller-manager-5c8f4f45cc-wkf58\" (UID: \"fa4c6523-4be1-4749-be39-e9f960029709\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.658490 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b5000d1-43e9-4fb9-ade4-81f02de8e6e2-serving-cert\") pod \"route-controller-manager-75f7b94877-kh8bh\" (UID: \"8b5000d1-43e9-4fb9-ade4-81f02de8e6e2\") " 
pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.673188 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqb8n\" (UniqueName: \"kubernetes.io/projected/8b5000d1-43e9-4fb9-ade4-81f02de8e6e2-kube-api-access-zqb8n\") pod \"route-controller-manager-75f7b94877-kh8bh\" (UID: \"8b5000d1-43e9-4fb9-ade4-81f02de8e6e2\") " pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.685475 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zkvc\" (UniqueName: \"kubernetes.io/projected/fa4c6523-4be1-4749-be39-e9f960029709-kube-api-access-2zkvc\") pod \"controller-manager-5c8f4f45cc-wkf58\" (UID: \"fa4c6523-4be1-4749-be39-e9f960029709\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.686036 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54960e89-4e49-4c21-bea4-cc46fcf8edba" path="/var/lib/kubelet/pods/54960e89-4e49-4c21-bea4-cc46fcf8edba/volumes" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.686860 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da8f0888-39cd-4813-8f5b-ba725fb15ee5" path="/var/lib/kubelet/pods/da8f0888-39cd-4813-8f5b-ba725fb15ee5/volumes" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.821451 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58" Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.831111 4933 util.go:30] "No sandbox for pod can be found. 
Dec 01 09:37:01 crc kubenswrapper[4933]: I1201 09:37:01.831111 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh"
Dec 01 09:37:02 crc kubenswrapper[4933]: I1201 09:37:02.074204 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh"]
Dec 01 09:37:02 crc kubenswrapper[4933]: I1201 09:37:02.228626 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58"]
Dec 01 09:37:02 crc kubenswrapper[4933]: W1201 09:37:02.237401 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa4c6523_4be1_4749_be39_e9f960029709.slice/crio-1059780b27e0c1cb5f33c3ef826cff42e7122f7df8886a36fd4e3100858eb4c3 WatchSource:0}: Error finding container 1059780b27e0c1cb5f33c3ef826cff42e7122f7df8886a36fd4e3100858eb4c3: Status 404 returned error can't find the container with id 1059780b27e0c1cb5f33c3ef826cff42e7122f7df8886a36fd4e3100858eb4c3
Dec 01 09:37:02 crc kubenswrapper[4933]: I1201 09:37:02.773884 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58" event={"ID":"fa4c6523-4be1-4749-be39-e9f960029709","Type":"ContainerStarted","Data":"9e109fb7a95ad01d34913e9c1d1e7b6712b38af6b8cc8ef7641628cf3ac5fc7c"}
Dec 01 09:37:02 crc kubenswrapper[4933]: I1201 09:37:02.773952 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58" event={"ID":"fa4c6523-4be1-4749-be39-e9f960029709","Type":"ContainerStarted","Data":"1059780b27e0c1cb5f33c3ef826cff42e7122f7df8886a36fd4e3100858eb4c3"}
Dec 01 09:37:02 crc kubenswrapper[4933]: I1201 09:37:02.774602 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58"
Dec 01 09:37:02 crc kubenswrapper[4933]: I1201 09:37:02.775897 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh" event={"ID":"8b5000d1-43e9-4fb9-ade4-81f02de8e6e2","Type":"ContainerStarted","Data":"7be99243f67f4fbe71f1f943188f4ce6069f572dadff108e96f5de9b1271b5a2"}
Dec 01 09:37:02 crc kubenswrapper[4933]: I1201 09:37:02.775958 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh" event={"ID":"8b5000d1-43e9-4fb9-ade4-81f02de8e6e2","Type":"ContainerStarted","Data":"117fe4bdaa5cb71f9e7b76cec21795ce67299a18becf5093a8b0532583922e50"}
Dec 01 09:37:02 crc kubenswrapper[4933]: I1201 09:37:02.776467 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh"
Dec 01 09:37:02 crc kubenswrapper[4933]: I1201 09:37:02.781489 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58"
Dec 01 09:37:02 crc kubenswrapper[4933]: I1201 09:37:02.782406 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh"
Dec 01 09:37:02 crc kubenswrapper[4933]: I1201 09:37:02.800093 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58" podStartSLOduration=2.8000688240000002 podStartE2EDuration="2.800068824s" podCreationTimestamp="2025-12-01 09:37:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:37:02.796111387 +0000 UTC m=+313.437835002" watchObservedRunningTime="2025-12-01 09:37:02.800068824 +0000 UTC m=+313.441792429"
Dec 01 09:37:02 crc kubenswrapper[4933]: I1201 09:37:02.866454 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh" podStartSLOduration=2.866407772 podStartE2EDuration="2.866407772s" podCreationTimestamp="2025-12-01 09:37:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:37:02.862146768 +0000 UTC m=+313.503870383" watchObservedRunningTime="2025-12-01 09:37:02.866407772 +0000 UTC m=+313.508131387"
Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.053415 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dlplx"]
Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.056189 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dlplx"
Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.058743 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.066529 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dlplx"]
Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.120437 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r22tb\" (UniqueName: \"kubernetes.io/projected/0bf9bb02-5235-4319-9c5d-44b8228fd6eb-kube-api-access-r22tb\") pod \"community-operators-dlplx\" (UID: \"0bf9bb02-5235-4319-9c5d-44b8228fd6eb\") " pod="openshift-marketplace/community-operators-dlplx"
Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.120495 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bf9bb02-5235-4319-9c5d-44b8228fd6eb-utilities\") pod \"community-operators-dlplx\" (UID: \"0bf9bb02-5235-4319-9c5d-44b8228fd6eb\") " pod="openshift-marketplace/community-operators-dlplx"
Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.120711 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bf9bb02-5235-4319-9c5d-44b8228fd6eb-catalog-content\") pod \"community-operators-dlplx\" (UID: \"0bf9bb02-5235-4319-9c5d-44b8228fd6eb\") " pod="openshift-marketplace/community-operators-dlplx"
Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.222169 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bf9bb02-5235-4319-9c5d-44b8228fd6eb-catalog-content\") pod \"community-operators-dlplx\" (UID: \"0bf9bb02-5235-4319-9c5d-44b8228fd6eb\") " pod="openshift-marketplace/community-operators-dlplx"
Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.222244 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r22tb\" (UniqueName: \"kubernetes.io/projected/0bf9bb02-5235-4319-9c5d-44b8228fd6eb-kube-api-access-r22tb\") pod \"community-operators-dlplx\" (UID: \"0bf9bb02-5235-4319-9c5d-44b8228fd6eb\") " pod="openshift-marketplace/community-operators-dlplx"
Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.222279 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bf9bb02-5235-4319-9c5d-44b8228fd6eb-utilities\") pod \"community-operators-dlplx\" (UID: \"0bf9bb02-5235-4319-9c5d-44b8228fd6eb\") " pod="openshift-marketplace/community-operators-dlplx"
Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.222888 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bf9bb02-5235-4319-9c5d-44b8228fd6eb-catalog-content\") pod \"community-operators-dlplx\" (UID: \"0bf9bb02-5235-4319-9c5d-44b8228fd6eb\") " pod="openshift-marketplace/community-operators-dlplx"
Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.222958 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bf9bb02-5235-4319-9c5d-44b8228fd6eb-utilities\") pod \"community-operators-dlplx\" (UID: \"0bf9bb02-5235-4319-9c5d-44b8228fd6eb\") " pod="openshift-marketplace/community-operators-dlplx"
Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.247252 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r22tb\" (UniqueName: \"kubernetes.io/projected/0bf9bb02-5235-4319-9c5d-44b8228fd6eb-kube-api-access-r22tb\") pod \"community-operators-dlplx\" (UID: \"0bf9bb02-5235-4319-9c5d-44b8228fd6eb\") " pod="openshift-marketplace/community-operators-dlplx"
Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.254976 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4hfd4"]
Need to start a new one" pod="openshift-marketplace/certified-operators-4hfd4" Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.259774 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.268825 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4hfd4"] Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.323749 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf2bc\" (UniqueName: \"kubernetes.io/projected/e1c51de3-51f6-4fb3-9800-fb97313a6212-kube-api-access-lf2bc\") pod \"certified-operators-4hfd4\" (UID: \"e1c51de3-51f6-4fb3-9800-fb97313a6212\") " pod="openshift-marketplace/certified-operators-4hfd4" Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.323837 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1c51de3-51f6-4fb3-9800-fb97313a6212-utilities\") pod \"certified-operators-4hfd4\" (UID: \"e1c51de3-51f6-4fb3-9800-fb97313a6212\") " pod="openshift-marketplace/certified-operators-4hfd4" Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.323879 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1c51de3-51f6-4fb3-9800-fb97313a6212-catalog-content\") pod \"certified-operators-4hfd4\" (UID: \"e1c51de3-51f6-4fb3-9800-fb97313a6212\") " pod="openshift-marketplace/certified-operators-4hfd4" Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.381212 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dlplx" Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.425533 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf2bc\" (UniqueName: \"kubernetes.io/projected/e1c51de3-51f6-4fb3-9800-fb97313a6212-kube-api-access-lf2bc\") pod \"certified-operators-4hfd4\" (UID: \"e1c51de3-51f6-4fb3-9800-fb97313a6212\") " pod="openshift-marketplace/certified-operators-4hfd4" Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.425587 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1c51de3-51f6-4fb3-9800-fb97313a6212-utilities\") pod \"certified-operators-4hfd4\" (UID: \"e1c51de3-51f6-4fb3-9800-fb97313a6212\") " pod="openshift-marketplace/certified-operators-4hfd4" Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.425624 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1c51de3-51f6-4fb3-9800-fb97313a6212-catalog-content\") pod \"certified-operators-4hfd4\" (UID: \"e1c51de3-51f6-4fb3-9800-fb97313a6212\") " pod="openshift-marketplace/certified-operators-4hfd4" Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.426830 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1c51de3-51f6-4fb3-9800-fb97313a6212-utilities\") pod \"certified-operators-4hfd4\" (UID: \"e1c51de3-51f6-4fb3-9800-fb97313a6212\") " pod="openshift-marketplace/certified-operators-4hfd4" Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.426873 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1c51de3-51f6-4fb3-9800-fb97313a6212-catalog-content\") pod \"certified-operators-4hfd4\" (UID: \"e1c51de3-51f6-4fb3-9800-fb97313a6212\") " pod="openshift-marketplace/certified-operators-4hfd4" Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.446001 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf2bc\" (UniqueName: \"kubernetes.io/projected/e1c51de3-51f6-4fb3-9800-fb97313a6212-kube-api-access-lf2bc\") pod \"certified-operators-4hfd4\" (UID: \"e1c51de3-51f6-4fb3-9800-fb97313a6212\") " pod="openshift-marketplace/certified-operators-4hfd4" Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.580639 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4hfd4" Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.805391 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dlplx"] Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.923859 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlplx" event={"ID":"0bf9bb02-5235-4319-9c5d-44b8228fd6eb","Type":"ContainerStarted","Data":"77bcc6b057d4bac935c85655dccf67b454381648a3613e3caf266c1bf5122a43"} Dec 01 09:37:28 crc kubenswrapper[4933]: I1201 09:37:28.996530 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4hfd4"] Dec 01 09:37:29 crc kubenswrapper[4933]: W1201 09:37:29.002038 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1c51de3_51f6_4fb3_9800_fb97313a6212.slice/crio-ec80678ea4f2d355c86f31ac35bfcbef4b25020e2e71a98117a8ec146c9d32d0 WatchSource:0}: Error finding container ec80678ea4f2d355c86f31ac35bfcbef4b25020e2e71a98117a8ec146c9d32d0: Status 404 returned error can't find the container with id ec80678ea4f2d355c86f31ac35bfcbef4b25020e2e71a98117a8ec146c9d32d0 Dec 01 09:37:29 crc kubenswrapper[4933]: I1201 09:37:29.930327 4933 generic.go:334] "Generic (PLEG): container finished" podID="e1c51de3-51f6-4fb3-9800-fb97313a6212" containerID="e6aed2d9daf046d07aafca2ecd3133848a0f6e66fc573193aeebe759fec4277d" exitCode=0 Dec 01 09:37:29 crc kubenswrapper[4933]: I1201 09:37:29.930434 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4hfd4" event={"ID":"e1c51de3-51f6-4fb3-9800-fb97313a6212","Type":"ContainerDied","Data":"e6aed2d9daf046d07aafca2ecd3133848a0f6e66fc573193aeebe759fec4277d"} Dec 01 09:37:29 crc kubenswrapper[4933]: I1201 09:37:29.930485 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4hfd4" event={"ID":"e1c51de3-51f6-4fb3-9800-fb97313a6212","Type":"ContainerStarted","Data":"ec80678ea4f2d355c86f31ac35bfcbef4b25020e2e71a98117a8ec146c9d32d0"} Dec 01 09:37:29 crc kubenswrapper[4933]: I1201 09:37:29.932992 4933 generic.go:334] "Generic (PLEG): container finished" podID="0bf9bb02-5235-4319-9c5d-44b8228fd6eb" containerID="ce63f5ad92406a26875530cd4c1f6510ee9b38f8f16af89e1e35ead8f118b3d4" exitCode=0 Dec 01 09:37:29 crc kubenswrapper[4933]: I1201 09:37:29.933026 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlplx" event={"ID":"0bf9bb02-5235-4319-9c5d-44b8228fd6eb","Type":"ContainerDied","Data":"ce63f5ad92406a26875530cd4c1f6510ee9b38f8f16af89e1e35ead8f118b3d4"} Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.452297 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cvjll"] Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.453599 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cvjll" Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.457550 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.469280 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cvjll"] Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.555203 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/879fdd94-31c3-4e2b-b47e-291738616c68-utilities\") pod \"redhat-marketplace-cvjll\" (UID: \"879fdd94-31c3-4e2b-b47e-291738616c68\") " pod="openshift-marketplace/redhat-marketplace-cvjll" Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.555273 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/879fdd94-31c3-4e2b-b47e-291738616c68-catalog-content\") pod \"redhat-marketplace-cvjll\" (UID: \"879fdd94-31c3-4e2b-b47e-291738616c68\") " pod="openshift-marketplace/redhat-marketplace-cvjll" Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.555319 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d64lt\" (UniqueName: \"kubernetes.io/projected/879fdd94-31c3-4e2b-b47e-291738616c68-kube-api-access-d64lt\") pod \"redhat-marketplace-cvjll\" (UID: \"879fdd94-31c3-4e2b-b47e-291738616c68\") " pod="openshift-marketplace/redhat-marketplace-cvjll" Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.651375 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-959q8"] Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.652927 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-959q8" Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.656197 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.656754 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/879fdd94-31c3-4e2b-b47e-291738616c68-utilities\") pod \"redhat-marketplace-cvjll\" (UID: \"879fdd94-31c3-4e2b-b47e-291738616c68\") " pod="openshift-marketplace/redhat-marketplace-cvjll" Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.656827 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/879fdd94-31c3-4e2b-b47e-291738616c68-catalog-content\") pod \"redhat-marketplace-cvjll\" (UID: \"879fdd94-31c3-4e2b-b47e-291738616c68\") " pod="openshift-marketplace/redhat-marketplace-cvjll" Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.656869 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d64lt\" (UniqueName: \"kubernetes.io/projected/879fdd94-31c3-4e2b-b47e-291738616c68-kube-api-access-d64lt\") pod \"redhat-marketplace-cvjll\" (UID: \"879fdd94-31c3-4e2b-b47e-291738616c68\") " pod="openshift-marketplace/redhat-marketplace-cvjll" Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.657630 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/879fdd94-31c3-4e2b-b47e-291738616c68-utilities\") pod \"redhat-marketplace-cvjll\" (UID: \"879fdd94-31c3-4e2b-b47e-291738616c68\") " pod="openshift-marketplace/redhat-marketplace-cvjll" Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.657815 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/879fdd94-31c3-4e2b-b47e-291738616c68-catalog-content\") pod \"redhat-marketplace-cvjll\" (UID: \"879fdd94-31c3-4e2b-b47e-291738616c68\") " pod="openshift-marketplace/redhat-marketplace-cvjll" Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.665759 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-959q8"] Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.686398 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d64lt\" (UniqueName: \"kubernetes.io/projected/879fdd94-31c3-4e2b-b47e-291738616c68-kube-api-access-d64lt\") pod \"redhat-marketplace-cvjll\" (UID: \"879fdd94-31c3-4e2b-b47e-291738616c68\") " pod="openshift-marketplace/redhat-marketplace-cvjll" Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.757518 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8590b477-1f35-4afa-84b9-e96cb2c21535-utilities\") pod \"redhat-operators-959q8\" (UID: \"8590b477-1f35-4afa-84b9-e96cb2c21535\") " pod="openshift-marketplace/redhat-operators-959q8" Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.757613 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9nhn\" (UniqueName: \"kubernetes.io/projected/8590b477-1f35-4afa-84b9-e96cb2c21535-kube-api-access-d9nhn\") pod \"redhat-operators-959q8\" (UID: \"8590b477-1f35-4afa-84b9-e96cb2c21535\") " 
pod="openshift-marketplace/redhat-operators-959q8" Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.757661 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8590b477-1f35-4afa-84b9-e96cb2c21535-catalog-content\") pod \"redhat-operators-959q8\" (UID: \"8590b477-1f35-4afa-84b9-e96cb2c21535\") " pod="openshift-marketplace/redhat-operators-959q8" Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.773700 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cvjll" Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.859727 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9nhn\" (UniqueName: \"kubernetes.io/projected/8590b477-1f35-4afa-84b9-e96cb2c21535-kube-api-access-d9nhn\") pod \"redhat-operators-959q8\" (UID: \"8590b477-1f35-4afa-84b9-e96cb2c21535\") " pod="openshift-marketplace/redhat-operators-959q8" Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.859812 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8590b477-1f35-4afa-84b9-e96cb2c21535-catalog-content\") pod \"redhat-operators-959q8\" (UID: \"8590b477-1f35-4afa-84b9-e96cb2c21535\") " pod="openshift-marketplace/redhat-operators-959q8" Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.859878 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8590b477-1f35-4afa-84b9-e96cb2c21535-utilities\") pod \"redhat-operators-959q8\" (UID: \"8590b477-1f35-4afa-84b9-e96cb2c21535\") " pod="openshift-marketplace/redhat-operators-959q8" Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.860623 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8590b477-1f35-4afa-84b9-e96cb2c21535-utilities\") pod \"redhat-operators-959q8\" (UID: \"8590b477-1f35-4afa-84b9-e96cb2c21535\") " pod="openshift-marketplace/redhat-operators-959q8" Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.860732 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8590b477-1f35-4afa-84b9-e96cb2c21535-catalog-content\") pod \"redhat-operators-959q8\" (UID: \"8590b477-1f35-4afa-84b9-e96cb2c21535\") " pod="openshift-marketplace/redhat-operators-959q8" Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.879009 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9nhn\" (UniqueName: \"kubernetes.io/projected/8590b477-1f35-4afa-84b9-e96cb2c21535-kube-api-access-d9nhn\") pod \"redhat-operators-959q8\" (UID: \"8590b477-1f35-4afa-84b9-e96cb2c21535\") " pod="openshift-marketplace/redhat-operators-959q8" Dec 01 09:37:30 crc kubenswrapper[4933]: I1201 09:37:30.970151 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-959q8" Dec 01 09:37:31 crc kubenswrapper[4933]: I1201 09:37:31.182862 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cvjll"] Dec 01 09:37:31 crc kubenswrapper[4933]: W1201 09:37:31.192819 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod879fdd94_31c3_4e2b_b47e_291738616c68.slice/crio-58295951d635d16c072791e5381a1a42419ab0d5d776aa5d0c555efe2ccb594a WatchSource:0}: Error finding container 58295951d635d16c072791e5381a1a42419ab0d5d776aa5d0c555efe2ccb594a: Status 404 returned error can't find the container with id 58295951d635d16c072791e5381a1a42419ab0d5d776aa5d0c555efe2ccb594a Dec 01 09:37:31 crc kubenswrapper[4933]: I1201 09:37:31.401233 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-959q8"] Dec 01 09:37:31 crc kubenswrapper[4933]: W1201 09:37:31.417588 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8590b477_1f35_4afa_84b9_e96cb2c21535.slice/crio-a4520e6618aaac6b7e064d1aae18a06f1ba781e8f7efc4c5af8f38c7817eb910 WatchSource:0}: Error finding container a4520e6618aaac6b7e064d1aae18a06f1ba781e8f7efc4c5af8f38c7817eb910: Status 404 returned error can't find the container with id a4520e6618aaac6b7e064d1aae18a06f1ba781e8f7efc4c5af8f38c7817eb910 Dec 01 09:37:31 crc kubenswrapper[4933]: I1201 09:37:31.947836 4933 generic.go:334] "Generic (PLEG): container finished" podID="8590b477-1f35-4afa-84b9-e96cb2c21535" containerID="71d2f7a52dda5ce47eaf914f01faac591ac43a2d3e6ece0af5bda0a1388dc23a" exitCode=0 Dec 01 09:37:31 crc kubenswrapper[4933]: I1201 09:37:31.947942 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-959q8" event={"ID":"8590b477-1f35-4afa-84b9-e96cb2c21535","Type":"ContainerDied","Data":"71d2f7a52dda5ce47eaf914f01faac591ac43a2d3e6ece0af5bda0a1388dc23a"} Dec 01 09:37:31 crc kubenswrapper[4933]: I1201 09:37:31.947981 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-959q8" event={"ID":"8590b477-1f35-4afa-84b9-e96cb2c21535","Type":"ContainerStarted","Data":"a4520e6618aaac6b7e064d1aae18a06f1ba781e8f7efc4c5af8f38c7817eb910"} Dec 01 09:37:31 crc kubenswrapper[4933]: I1201 09:37:31.951376 4933 generic.go:334] "Generic (PLEG): container finished" podID="e1c51de3-51f6-4fb3-9800-fb97313a6212" containerID="ceeae17c34d4ca93a266877c3079be3b692e4cd8e3c64600d946957984d1ce79" exitCode=0 Dec 01 09:37:31 crc kubenswrapper[4933]: I1201 09:37:31.951432 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4hfd4" event={"ID":"e1c51de3-51f6-4fb3-9800-fb97313a6212","Type":"ContainerDied","Data":"ceeae17c34d4ca93a266877c3079be3b692e4cd8e3c64600d946957984d1ce79"} Dec 01 09:37:31 crc kubenswrapper[4933]: I1201 09:37:31.955625 4933 generic.go:334] "Generic (PLEG): container finished" podID="0bf9bb02-5235-4319-9c5d-44b8228fd6eb" containerID="5badca8253fe3d2d240a79e91992cea9a4451e3bef8560dd8bf00f69d91c96f0" exitCode=0 Dec 01 09:37:31 crc kubenswrapper[4933]: I1201 09:37:31.955746 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlplx" event={"ID":"0bf9bb02-5235-4319-9c5d-44b8228fd6eb","Type":"ContainerDied","Data":"5badca8253fe3d2d240a79e91992cea9a4451e3bef8560dd8bf00f69d91c96f0"} Dec 01 
09:37:31 crc kubenswrapper[4933]: I1201 09:37:31.958249 4933 generic.go:334] "Generic (PLEG): container finished" podID="879fdd94-31c3-4e2b-b47e-291738616c68" containerID="cf4b0e7d555e765a192dd5a21c418808b3187dd278d659f05631ae9f70e14bdd" exitCode=0 Dec 01 09:37:31 crc kubenswrapper[4933]: I1201 09:37:31.958297 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cvjll" event={"ID":"879fdd94-31c3-4e2b-b47e-291738616c68","Type":"ContainerDied","Data":"cf4b0e7d555e765a192dd5a21c418808b3187dd278d659f05631ae9f70e14bdd"} Dec 01 09:37:31 crc kubenswrapper[4933]: I1201 09:37:31.958370 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cvjll" event={"ID":"879fdd94-31c3-4e2b-b47e-291738616c68","Type":"ContainerStarted","Data":"58295951d635d16c072791e5381a1a42419ab0d5d776aa5d0c555efe2ccb594a"} Dec 01 09:37:32 crc kubenswrapper[4933]: I1201 09:37:32.966452 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-959q8" event={"ID":"8590b477-1f35-4afa-84b9-e96cb2c21535","Type":"ContainerStarted","Data":"9d08083e4aaa229994c48ba6fa61febabb756a1fbf7140bcacafbebb9d2df846"} Dec 01 09:37:32 crc kubenswrapper[4933]: I1201 09:37:32.968717 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4hfd4" event={"ID":"e1c51de3-51f6-4fb3-9800-fb97313a6212","Type":"ContainerStarted","Data":"13a1333bbd1313d36a3f37260321e49e2a28b784ad95f8b0595e42bc09f4f678"} Dec 01 09:37:32 crc kubenswrapper[4933]: I1201 09:37:32.971874 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlplx" event={"ID":"0bf9bb02-5235-4319-9c5d-44b8228fd6eb","Type":"ContainerStarted","Data":"15acbeec46c7b1ed0ab6d37d2220689c3bc60ff82e37998fa89edcd2de5f0f33"} Dec 01 09:37:32 crc kubenswrapper[4933]: I1201 09:37:32.974158 4933 generic.go:334] "Generic (PLEG): container finished" podID="879fdd94-31c3-4e2b-b47e-291738616c68" containerID="65c734a55b59cc9c4647769430540aa627d82bff8b6f46fbb5a2e83b38687e17" exitCode=0 Dec 01 09:37:32 crc kubenswrapper[4933]: I1201 09:37:32.974223 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cvjll" event={"ID":"879fdd94-31c3-4e2b-b47e-291738616c68","Type":"ContainerDied","Data":"65c734a55b59cc9c4647769430540aa627d82bff8b6f46fbb5a2e83b38687e17"} Dec 01 09:37:33 crc kubenswrapper[4933]: I1201 09:37:33.036456 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dlplx" podStartSLOduration=2.485052021 podStartE2EDuration="5.036435834s" podCreationTimestamp="2025-12-01 09:37:28 +0000 UTC" firstStartedPulling="2025-12-01 09:37:29.93424703 +0000 UTC m=+340.575970645" lastFinishedPulling="2025-12-01 09:37:32.485630843 +0000 UTC m=+343.127354458" observedRunningTime="2025-12-01 09:37:33.034988168 +0000 UTC m=+343.676711783" watchObservedRunningTime="2025-12-01 09:37:33.036435834 +0000 UTC m=+343.678159439" Dec 01 09:37:33 crc kubenswrapper[4933]: I1201 09:37:33.063963 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4hfd4" podStartSLOduration=2.417661657 podStartE2EDuration="5.063925569s" podCreationTimestamp="2025-12-01 09:37:28 +0000 UTC" firstStartedPulling="2025-12-01 09:37:29.931838732 +0000 UTC m=+340.573562347" lastFinishedPulling="2025-12-01 09:37:32.578102644 +0000 UTC m=+343.219826259" 
observedRunningTime="2025-12-01 09:37:33.061414847 +0000 UTC m=+343.703138472" watchObservedRunningTime="2025-12-01 09:37:33.063925569 +0000 UTC m=+343.705649184" Dec 01 09:37:33 crc kubenswrapper[4933]: I1201 09:37:33.982838 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cvjll" event={"ID":"879fdd94-31c3-4e2b-b47e-291738616c68","Type":"ContainerStarted","Data":"781106506e88e2a8eb603c7e96dca3f52b6b8b4f46f558a31cd2b8f91a5908e1"} Dec 01 09:37:33 crc kubenswrapper[4933]: I1201 09:37:33.987821 4933 generic.go:334] "Generic (PLEG): container finished" podID="8590b477-1f35-4afa-84b9-e96cb2c21535" containerID="9d08083e4aaa229994c48ba6fa61febabb756a1fbf7140bcacafbebb9d2df846" exitCode=0 Dec 01 09:37:33 crc kubenswrapper[4933]: I1201 09:37:33.987915 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-959q8" event={"ID":"8590b477-1f35-4afa-84b9-e96cb2c21535","Type":"ContainerDied","Data":"9d08083e4aaa229994c48ba6fa61febabb756a1fbf7140bcacafbebb9d2df846"} Dec 01 09:37:34 crc kubenswrapper[4933]: I1201 09:37:34.005235 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cvjll" podStartSLOduration=2.416914978 podStartE2EDuration="4.005213302s" podCreationTimestamp="2025-12-01 09:37:30 +0000 UTC" firstStartedPulling="2025-12-01 09:37:31.960850424 +0000 UTC m=+342.602574039" lastFinishedPulling="2025-12-01 09:37:33.549148748 +0000 UTC m=+344.190872363" observedRunningTime="2025-12-01 09:37:34.004494874 +0000 UTC m=+344.646218499" watchObservedRunningTime="2025-12-01 09:37:34.005213302 +0000 UTC m=+344.646936917" Dec 01 09:37:34 crc kubenswrapper[4933]: I1201 09:37:34.997363 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-959q8" event={"ID":"8590b477-1f35-4afa-84b9-e96cb2c21535","Type":"ContainerStarted","Data":"b2ccebc27e3c0c649e2f8244e3a8567dac7011dc72516432a24e2fa654e0aa56"} Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.232502 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-959q8" podStartSLOduration=3.65225374 podStartE2EDuration="6.232478501s" podCreationTimestamp="2025-12-01 09:37:30 +0000 UTC" firstStartedPulling="2025-12-01 09:37:31.949502925 +0000 UTC m=+342.591226540" lastFinishedPulling="2025-12-01 09:37:34.529727686 +0000 UTC m=+345.171451301" observedRunningTime="2025-12-01 09:37:35.022375388 +0000 UTC m=+345.664098993" watchObservedRunningTime="2025-12-01 09:37:36.232478501 +0000 UTC m=+346.874202116" Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.237605 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jqksb"] Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.238524 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.252428 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jqksb"] Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.338660 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jqksb\" (UID: \"0f5602ad-37dd-4d20-9c2e-895081eebe63\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.338729 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-654tc\" (UniqueName: \"kubernetes.io/projected/0f5602ad-37dd-4d20-9c2e-895081eebe63-kube-api-access-654tc\") pod \"image-registry-66df7c8f76-jqksb\" (UID: \"0f5602ad-37dd-4d20-9c2e-895081eebe63\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.338761 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0f5602ad-37dd-4d20-9c2e-895081eebe63-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jqksb\" (UID: \"0f5602ad-37dd-4d20-9c2e-895081eebe63\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.338792 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0f5602ad-37dd-4d20-9c2e-895081eebe63-registry-tls\") pod \"image-registry-66df7c8f76-jqksb\" (UID: \"0f5602ad-37dd-4d20-9c2e-895081eebe63\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.338821 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f5602ad-37dd-4d20-9c2e-895081eebe63-bound-sa-token\") pod \"image-registry-66df7c8f76-jqksb\" (UID: \"0f5602ad-37dd-4d20-9c2e-895081eebe63\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.338910 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f5602ad-37dd-4d20-9c2e-895081eebe63-trusted-ca\") pod \"image-registry-66df7c8f76-jqksb\" (UID: \"0f5602ad-37dd-4d20-9c2e-895081eebe63\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.338947 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0f5602ad-37dd-4d20-9c2e-895081eebe63-registry-certificates\") pod \"image-registry-66df7c8f76-jqksb\" (UID: \"0f5602ad-37dd-4d20-9c2e-895081eebe63\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.338969 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/0f5602ad-37dd-4d20-9c2e-895081eebe63-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jqksb\" (UID: \"0f5602ad-37dd-4d20-9c2e-895081eebe63\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.359587 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jqksb\" (UID: \"0f5602ad-37dd-4d20-9c2e-895081eebe63\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.439979 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0f5602ad-37dd-4d20-9c2e-895081eebe63-registry-tls\") pod \"image-registry-66df7c8f76-jqksb\" (UID: \"0f5602ad-37dd-4d20-9c2e-895081eebe63\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.440035 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f5602ad-37dd-4d20-9c2e-895081eebe63-bound-sa-token\") pod \"image-registry-66df7c8f76-jqksb\" (UID: \"0f5602ad-37dd-4d20-9c2e-895081eebe63\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.440068 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f5602ad-37dd-4d20-9c2e-895081eebe63-trusted-ca\") pod \"image-registry-66df7c8f76-jqksb\" (UID: \"0f5602ad-37dd-4d20-9c2e-895081eebe63\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.440097 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0f5602ad-37dd-4d20-9c2e-895081eebe63-registry-certificates\") pod \"image-registry-66df7c8f76-jqksb\" (UID: \"0f5602ad-37dd-4d20-9c2e-895081eebe63\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.440114 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0f5602ad-37dd-4d20-9c2e-895081eebe63-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jqksb\" (UID: \"0f5602ad-37dd-4d20-9c2e-895081eebe63\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.440159 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-654tc\" (UniqueName: \"kubernetes.io/projected/0f5602ad-37dd-4d20-9c2e-895081eebe63-kube-api-access-654tc\") pod \"image-registry-66df7c8f76-jqksb\" (UID: \"0f5602ad-37dd-4d20-9c2e-895081eebe63\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.440182 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0f5602ad-37dd-4d20-9c2e-895081eebe63-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jqksb\" (UID: \"0f5602ad-37dd-4d20-9c2e-895081eebe63\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.440805 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0f5602ad-37dd-4d20-9c2e-895081eebe63-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jqksb\" (UID: \"0f5602ad-37dd-4d20-9c2e-895081eebe63\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.441963 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0f5602ad-37dd-4d20-9c2e-895081eebe63-registry-certificates\") pod \"image-registry-66df7c8f76-jqksb\" (UID: \"0f5602ad-37dd-4d20-9c2e-895081eebe63\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.442295 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f5602ad-37dd-4d20-9c2e-895081eebe63-trusted-ca\") pod \"image-registry-66df7c8f76-jqksb\" (UID: \"0f5602ad-37dd-4d20-9c2e-895081eebe63\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.446748 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0f5602ad-37dd-4d20-9c2e-895081eebe63-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jqksb\" (UID: \"0f5602ad-37dd-4d20-9c2e-895081eebe63\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.446824 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0f5602ad-37dd-4d20-9c2e-895081eebe63-registry-tls\") pod \"image-registry-66df7c8f76-jqksb\" (UID: \"0f5602ad-37dd-4d20-9c2e-895081eebe63\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.459665 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-654tc\" (UniqueName: \"kubernetes.io/projected/0f5602ad-37dd-4d20-9c2e-895081eebe63-kube-api-access-654tc\") pod \"image-registry-66df7c8f76-jqksb\" (UID: \"0f5602ad-37dd-4d20-9c2e-895081eebe63\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.462861 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f5602ad-37dd-4d20-9c2e-895081eebe63-bound-sa-token\") pod \"image-registry-66df7c8f76-jqksb\" (UID: \"0f5602ad-37dd-4d20-9c2e-895081eebe63\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.553884 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" Dec 01 09:37:36 crc kubenswrapper[4933]: I1201 09:37:36.975429 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jqksb"] Dec 01 09:37:36 crc kubenswrapper[4933]: W1201 09:37:36.978147 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f5602ad_37dd_4d20_9c2e_895081eebe63.slice/crio-5b7b81b62502cddf1322649bf09241ea0da65ff88cd1cf504692090c44b54197 WatchSource:0}: Error finding container 5b7b81b62502cddf1322649bf09241ea0da65ff88cd1cf504692090c44b54197: Status 404 returned error can't find the container with id 5b7b81b62502cddf1322649bf09241ea0da65ff88cd1cf504692090c44b54197 Dec 01 09:37:37 crc kubenswrapper[4933]: I1201 09:37:37.009793 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" event={"ID":"0f5602ad-37dd-4d20-9c2e-895081eebe63","Type":"ContainerStarted","Data":"5b7b81b62502cddf1322649bf09241ea0da65ff88cd1cf504692090c44b54197"} Dec 01 09:37:38 crc kubenswrapper[4933]: I1201 09:37:38.016566 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" event={"ID":"0f5602ad-37dd-4d20-9c2e-895081eebe63","Type":"ContainerStarted","Data":"43b5df7ec74a78fe76c334600acd250d3c439b1554238b77053438e3d5cf9285"} Dec 01 09:37:38 crc kubenswrapper[4933]: I1201 09:37:38.016928 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" Dec 01 09:37:38 crc kubenswrapper[4933]: I1201 09:37:38.038934 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-jqksb" podStartSLOduration=2.03891218 podStartE2EDuration="2.03891218s" podCreationTimestamp="2025-12-01 09:37:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:37:38.035496565 +0000 UTC m=+348.677220190" watchObservedRunningTime="2025-12-01 09:37:38.03891218 +0000 UTC m=+348.680635815" Dec 01 09:37:38 crc kubenswrapper[4933]: I1201 09:37:38.382334 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dlplx" Dec 01 09:37:38 crc kubenswrapper[4933]: I1201 09:37:38.382402 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dlplx" Dec 01 09:37:38 crc kubenswrapper[4933]: I1201 09:37:38.451344 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dlplx" Dec 01 09:37:38 crc kubenswrapper[4933]: I1201 09:37:38.581398 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4hfd4" Dec 01 09:37:38 crc kubenswrapper[4933]: I1201 09:37:38.581483 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4hfd4" Dec 01 09:37:38 crc kubenswrapper[4933]: I1201 09:37:38.623995 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4hfd4" Dec 01 09:37:38 crc kubenswrapper[4933]: I1201 09:37:38.857798 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58"] Dec 01 09:37:38 crc kubenswrapper[4933]: I1201 09:37:38.858077 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58" podUID="fa4c6523-4be1-4749-be39-e9f960029709" containerName="controller-manager" containerID="cri-o://9e109fb7a95ad01d34913e9c1d1e7b6712b38af6b8cc8ef7641628cf3ac5fc7c" gracePeriod=30 Dec 01 09:37:39 crc kubenswrapper[4933]: I1201 09:37:39.060572 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4hfd4" Dec 01 09:37:39 crc kubenswrapper[4933]: I1201 09:37:39.061517 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dlplx" Dec 01 09:37:40 crc kubenswrapper[4933]: I1201 09:37:40.774125 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cvjll" Dec 01 09:37:40 crc kubenswrapper[4933]: I1201 09:37:40.774666 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cvjll" Dec 01 09:37:40 crc kubenswrapper[4933]: I1201 09:37:40.815045 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cvjll" Dec 01 09:37:40 crc kubenswrapper[4933]: I1201 09:37:40.971437 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-959q8" Dec 01 09:37:40 crc kubenswrapper[4933]: I1201 09:37:40.971496 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-959q8" Dec 01 09:37:41 crc kubenswrapper[4933]: I1201 09:37:41.013292 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-959q8" Dec 01 09:37:41 crc kubenswrapper[4933]: I1201 09:37:41.068563 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-959q8" Dec 01 09:37:41 crc kubenswrapper[4933]: I1201 09:37:41.104465 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cvjll" Dec 01 09:37:41 crc kubenswrapper[4933]: I1201 09:37:41.741623 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:37:41 crc kubenswrapper[4933]: I1201 09:37:41.741727 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:37:41 crc kubenswrapper[4933]: I1201 09:37:41.822007 4933 patch_prober.go:28] interesting pod/controller-manager-5c8f4f45cc-wkf58 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Dec 01 09:37:41 crc kubenswrapper[4933]: I1201 09:37:41.822072 4933 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58" podUID="fa4c6523-4be1-4749-be39-e9f960029709" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.038725 4933 generic.go:334] "Generic (PLEG): container finished" podID="fa4c6523-4be1-4749-be39-e9f960029709" containerID="9e109fb7a95ad01d34913e9c1d1e7b6712b38af6b8cc8ef7641628cf3ac5fc7c" exitCode=0 Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.039604 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58" event={"ID":"fa4c6523-4be1-4749-be39-e9f960029709","Type":"ContainerDied","Data":"9e109fb7a95ad01d34913e9c1d1e7b6712b38af6b8cc8ef7641628cf3ac5fc7c"} Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.100290 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58" Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.119822 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa4c6523-4be1-4749-be39-e9f960029709-config\") pod \"fa4c6523-4be1-4749-be39-e9f960029709\" (UID: \"fa4c6523-4be1-4749-be39-e9f960029709\") " Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.119912 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fa4c6523-4be1-4749-be39-e9f960029709-proxy-ca-bundles\") pod \"fa4c6523-4be1-4749-be39-e9f960029709\" (UID: \"fa4c6523-4be1-4749-be39-e9f960029709\") " Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.119947 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa4c6523-4be1-4749-be39-e9f960029709-serving-cert\") pod \"fa4c6523-4be1-4749-be39-e9f960029709\" (UID: \"fa4c6523-4be1-4749-be39-e9f960029709\") " Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.120065 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zkvc\" (UniqueName: \"kubernetes.io/projected/fa4c6523-4be1-4749-be39-e9f960029709-kube-api-access-2zkvc\") pod \"fa4c6523-4be1-4749-be39-e9f960029709\" (UID: \"fa4c6523-4be1-4749-be39-e9f960029709\") " Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.120099 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa4c6523-4be1-4749-be39-e9f960029709-client-ca\") pod \"fa4c6523-4be1-4749-be39-e9f960029709\" (UID: \"fa4c6523-4be1-4749-be39-e9f960029709\") " Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.121495 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa4c6523-4be1-4749-be39-e9f960029709-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "fa4c6523-4be1-4749-be39-e9f960029709" (UID: "fa4c6523-4be1-4749-be39-e9f960029709"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.121519 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa4c6523-4be1-4749-be39-e9f960029709-config" (OuterVolumeSpecName: "config") pod "fa4c6523-4be1-4749-be39-e9f960029709" (UID: "fa4c6523-4be1-4749-be39-e9f960029709"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.121769 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa4c6523-4be1-4749-be39-e9f960029709-client-ca" (OuterVolumeSpecName: "client-ca") pod "fa4c6523-4be1-4749-be39-e9f960029709" (UID: "fa4c6523-4be1-4749-be39-e9f960029709"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.128072 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa4c6523-4be1-4749-be39-e9f960029709-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fa4c6523-4be1-4749-be39-e9f960029709" (UID: "fa4c6523-4be1-4749-be39-e9f960029709"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.128839 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa4c6523-4be1-4749-be39-e9f960029709-kube-api-access-2zkvc" (OuterVolumeSpecName: "kube-api-access-2zkvc") pod "fa4c6523-4be1-4749-be39-e9f960029709" (UID: "fa4c6523-4be1-4749-be39-e9f960029709"). InnerVolumeSpecName "kube-api-access-2zkvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.161045 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-57cb68cf8c-hhtwv"] Dec 01 09:37:42 crc kubenswrapper[4933]: E1201 09:37:42.161508 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa4c6523-4be1-4749-be39-e9f960029709" containerName="controller-manager" Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.161536 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa4c6523-4be1-4749-be39-e9f960029709" containerName="controller-manager" Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.161673 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa4c6523-4be1-4749-be39-e9f960029709" containerName="controller-manager" Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.162350 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-57cb68cf8c-hhtwv" Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.166212 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57cb68cf8c-hhtwv"] Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.221632 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c78f4500-55c3-422d-91fc-fc08ceba9551-config\") pod \"controller-manager-57cb68cf8c-hhtwv\" (UID: \"c78f4500-55c3-422d-91fc-fc08ceba9551\") " pod="openshift-controller-manager/controller-manager-57cb68cf8c-hhtwv" Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.221683 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c78f4500-55c3-422d-91fc-fc08ceba9551-proxy-ca-bundles\") pod \"controller-manager-57cb68cf8c-hhtwv\" (UID: \"c78f4500-55c3-422d-91fc-fc08ceba9551\") " pod="openshift-controller-manager/controller-manager-57cb68cf8c-hhtwv" Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.221708 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c78f4500-55c3-422d-91fc-fc08ceba9551-serving-cert\") pod \"controller-manager-57cb68cf8c-hhtwv\" (UID: \"c78f4500-55c3-422d-91fc-fc08ceba9551\") " pod="openshift-controller-manager/controller-manager-57cb68cf8c-hhtwv" Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.221877 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrtvj\" (UniqueName: \"kubernetes.io/projected/c78f4500-55c3-422d-91fc-fc08ceba9551-kube-api-access-qrtvj\") pod \"controller-manager-57cb68cf8c-hhtwv\" (UID: \"c78f4500-55c3-422d-91fc-fc08ceba9551\") " pod="openshift-controller-manager/controller-manager-57cb68cf8c-hhtwv" Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.221956 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c78f4500-55c3-422d-91fc-fc08ceba9551-client-ca\") pod \"controller-manager-57cb68cf8c-hhtwv\" (UID: \"c78f4500-55c3-422d-91fc-fc08ceba9551\") " pod="openshift-controller-manager/controller-manager-57cb68cf8c-hhtwv" Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.222172 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zkvc\" (UniqueName: \"kubernetes.io/projected/fa4c6523-4be1-4749-be39-e9f960029709-kube-api-access-2zkvc\") on node \"crc\" DevicePath \"\"" Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.222197 4933 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa4c6523-4be1-4749-be39-e9f960029709-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.222215 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa4c6523-4be1-4749-be39-e9f960029709-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.222229 4933 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fa4c6523-4be1-4749-be39-e9f960029709-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 
09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.222240 4933 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa4c6523-4be1-4749-be39-e9f960029709-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.324000 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c78f4500-55c3-422d-91fc-fc08ceba9551-config\") pod \"controller-manager-57cb68cf8c-hhtwv\" (UID: \"c78f4500-55c3-422d-91fc-fc08ceba9551\") " pod="openshift-controller-manager/controller-manager-57cb68cf8c-hhtwv"
Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.325786 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c78f4500-55c3-422d-91fc-fc08ceba9551-proxy-ca-bundles\") pod \"controller-manager-57cb68cf8c-hhtwv\" (UID: \"c78f4500-55c3-422d-91fc-fc08ceba9551\") " pod="openshift-controller-manager/controller-manager-57cb68cf8c-hhtwv"
Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.325701 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c78f4500-55c3-422d-91fc-fc08ceba9551-config\") pod \"controller-manager-57cb68cf8c-hhtwv\" (UID: \"c78f4500-55c3-422d-91fc-fc08ceba9551\") " pod="openshift-controller-manager/controller-manager-57cb68cf8c-hhtwv"
Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.326098 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c78f4500-55c3-422d-91fc-fc08ceba9551-serving-cert\") pod \"controller-manager-57cb68cf8c-hhtwv\" (UID: \"c78f4500-55c3-422d-91fc-fc08ceba9551\") " pod="openshift-controller-manager/controller-manager-57cb68cf8c-hhtwv"
Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.326856 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrtvj\" (UniqueName: \"kubernetes.io/projected/c78f4500-55c3-422d-91fc-fc08ceba9551-kube-api-access-qrtvj\") pod \"controller-manager-57cb68cf8c-hhtwv\" (UID: \"c78f4500-55c3-422d-91fc-fc08ceba9551\") " pod="openshift-controller-manager/controller-manager-57cb68cf8c-hhtwv"
Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.326906 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c78f4500-55c3-422d-91fc-fc08ceba9551-client-ca\") pod \"controller-manager-57cb68cf8c-hhtwv\" (UID: \"c78f4500-55c3-422d-91fc-fc08ceba9551\") " pod="openshift-controller-manager/controller-manager-57cb68cf8c-hhtwv"
Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.327791 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c78f4500-55c3-422d-91fc-fc08ceba9551-client-ca\") pod \"controller-manager-57cb68cf8c-hhtwv\" (UID: \"c78f4500-55c3-422d-91fc-fc08ceba9551\") " pod="openshift-controller-manager/controller-manager-57cb68cf8c-hhtwv"
Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.327848 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c78f4500-55c3-422d-91fc-fc08ceba9551-proxy-ca-bundles\") pod \"controller-manager-57cb68cf8c-hhtwv\" (UID: \"c78f4500-55c3-422d-91fc-fc08ceba9551\") " pod="openshift-controller-manager/controller-manager-57cb68cf8c-hhtwv"
Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.332209 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c78f4500-55c3-422d-91fc-fc08ceba9551-serving-cert\") pod \"controller-manager-57cb68cf8c-hhtwv\" (UID: \"c78f4500-55c3-422d-91fc-fc08ceba9551\") " pod="openshift-controller-manager/controller-manager-57cb68cf8c-hhtwv"
Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.348523 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrtvj\" (UniqueName: \"kubernetes.io/projected/c78f4500-55c3-422d-91fc-fc08ceba9551-kube-api-access-qrtvj\") pod \"controller-manager-57cb68cf8c-hhtwv\" (UID: \"c78f4500-55c3-422d-91fc-fc08ceba9551\") " pod="openshift-controller-manager/controller-manager-57cb68cf8c-hhtwv"
Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.484902 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57cb68cf8c-hhtwv"
Dec 01 09:37:42 crc kubenswrapper[4933]: I1201 09:37:42.804551 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57cb68cf8c-hhtwv"]
Dec 01 09:37:43 crc kubenswrapper[4933]: I1201 09:37:43.046272 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57cb68cf8c-hhtwv" event={"ID":"c78f4500-55c3-422d-91fc-fc08ceba9551","Type":"ContainerStarted","Data":"2f6afd86d22e915ac5c6b16e35b08a16e58f493de943c2bb1e352396082a34fc"}
Dec 01 09:37:43 crc kubenswrapper[4933]: I1201 09:37:43.047884 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58" event={"ID":"fa4c6523-4be1-4749-be39-e9f960029709","Type":"ContainerDied","Data":"1059780b27e0c1cb5f33c3ef826cff42e7122f7df8886a36fd4e3100858eb4c3"}
Dec 01 09:37:43 crc kubenswrapper[4933]: I1201 09:37:43.047930 4933 scope.go:117] "RemoveContainer" containerID="9e109fb7a95ad01d34913e9c1d1e7b6712b38af6b8cc8ef7641628cf3ac5fc7c"
Dec 01 09:37:43 crc kubenswrapper[4933]: I1201 09:37:43.048032 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58"
Dec 01 09:37:43 crc kubenswrapper[4933]: I1201 09:37:43.098404 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58"]
Dec 01 09:37:43 crc kubenswrapper[4933]: I1201 09:37:43.103268 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5c8f4f45cc-wkf58"]
Dec 01 09:37:43 crc kubenswrapper[4933]: I1201 09:37:43.674526 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa4c6523-4be1-4749-be39-e9f960029709" path="/var/lib/kubelet/pods/fa4c6523-4be1-4749-be39-e9f960029709/volumes"
Dec 01 09:37:44 crc kubenswrapper[4933]: I1201 09:37:44.054670 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57cb68cf8c-hhtwv" event={"ID":"c78f4500-55c3-422d-91fc-fc08ceba9551","Type":"ContainerStarted","Data":"c5f5ce43abe2ec4cc6d5af9db5bcaa8ac026e540e28fdd266b7bb40ce08e9bd9"}
Dec 01 09:37:44 crc kubenswrapper[4933]: I1201 09:37:44.055178 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-57cb68cf8c-hhtwv"
Dec 01 09:37:44 crc kubenswrapper[4933]: I1201 09:37:44.061163 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-57cb68cf8c-hhtwv"
Dec 01 09:37:44 crc kubenswrapper[4933]: I1201 09:37:44.086657 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-57cb68cf8c-hhtwv" podStartSLOduration=6.08663337 podStartE2EDuration="6.08663337s" podCreationTimestamp="2025-12-01 09:37:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:37:44.086430916 +0000 UTC m=+354.728154541" watchObservedRunningTime="2025-12-01 09:37:44.08663337 +0000 UTC m=+354.728356995"
Dec 01 09:37:56 crc kubenswrapper[4933]: I1201 09:37:56.560093 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-jqksb"
Dec 01 09:37:56 crc kubenswrapper[4933]: I1201 09:37:56.627503 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-znqzs"]
Dec 01 09:38:11 crc kubenswrapper[4933]: I1201 09:38:11.741040 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 09:38:11 crc kubenswrapper[4933]: I1201 09:38:11.742105 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 09:38:18 crc kubenswrapper[4933]: I1201 09:38:18.877505 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh"]
Dec 01 09:38:18 crc kubenswrapper[4933]: I1201 09:38:18.878647 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh" podUID="8b5000d1-43e9-4fb9-ade4-81f02de8e6e2" containerName="route-controller-manager" containerID="cri-o://7be99243f67f4fbe71f1f943188f4ce6069f572dadff108e96f5de9b1271b5a2" gracePeriod=30
Dec 01 09:38:19 crc kubenswrapper[4933]: I1201 09:38:19.263441 4933 generic.go:334] "Generic (PLEG): container finished" podID="8b5000d1-43e9-4fb9-ade4-81f02de8e6e2" containerID="7be99243f67f4fbe71f1f943188f4ce6069f572dadff108e96f5de9b1271b5a2" exitCode=0
Dec 01 09:38:19 crc kubenswrapper[4933]: I1201 09:38:19.263565 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh" event={"ID":"8b5000d1-43e9-4fb9-ade4-81f02de8e6e2","Type":"ContainerDied","Data":"7be99243f67f4fbe71f1f943188f4ce6069f572dadff108e96f5de9b1271b5a2"}
Dec 01 09:38:19 crc kubenswrapper[4933]: I1201 09:38:19.352591 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh"
Dec 01 09:38:19 crc kubenswrapper[4933]: I1201 09:38:19.483694 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b5000d1-43e9-4fb9-ade4-81f02de8e6e2-client-ca\") pod \"8b5000d1-43e9-4fb9-ade4-81f02de8e6e2\" (UID: \"8b5000d1-43e9-4fb9-ade4-81f02de8e6e2\") "
Dec 01 09:38:19 crc kubenswrapper[4933]: I1201 09:38:19.483848 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b5000d1-43e9-4fb9-ade4-81f02de8e6e2-config\") pod \"8b5000d1-43e9-4fb9-ade4-81f02de8e6e2\" (UID: \"8b5000d1-43e9-4fb9-ade4-81f02de8e6e2\") "
Dec 01 09:38:19 crc kubenswrapper[4933]: I1201 09:38:19.483903 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b5000d1-43e9-4fb9-ade4-81f02de8e6e2-serving-cert\") pod \"8b5000d1-43e9-4fb9-ade4-81f02de8e6e2\" (UID: \"8b5000d1-43e9-4fb9-ade4-81f02de8e6e2\") "
Dec 01 09:38:19 crc kubenswrapper[4933]: I1201 09:38:19.483985 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqb8n\" (UniqueName: \"kubernetes.io/projected/8b5000d1-43e9-4fb9-ade4-81f02de8e6e2-kube-api-access-zqb8n\") pod \"8b5000d1-43e9-4fb9-ade4-81f02de8e6e2\" (UID: \"8b5000d1-43e9-4fb9-ade4-81f02de8e6e2\") "
Dec 01 09:38:19 crc kubenswrapper[4933]: I1201 09:38:19.484658 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b5000d1-43e9-4fb9-ade4-81f02de8e6e2-client-ca" (OuterVolumeSpecName: "client-ca") pod "8b5000d1-43e9-4fb9-ade4-81f02de8e6e2" (UID: "8b5000d1-43e9-4fb9-ade4-81f02de8e6e2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:38:19 crc kubenswrapper[4933]: I1201 09:38:19.484725 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b5000d1-43e9-4fb9-ade4-81f02de8e6e2-config" (OuterVolumeSpecName: "config") pod "8b5000d1-43e9-4fb9-ade4-81f02de8e6e2" (UID: "8b5000d1-43e9-4fb9-ade4-81f02de8e6e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:38:19 crc kubenswrapper[4933]: I1201 09:38:19.492748 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b5000d1-43e9-4fb9-ade4-81f02de8e6e2-kube-api-access-zqb8n" (OuterVolumeSpecName: "kube-api-access-zqb8n") pod "8b5000d1-43e9-4fb9-ade4-81f02de8e6e2" (UID: "8b5000d1-43e9-4fb9-ade4-81f02de8e6e2"). InnerVolumeSpecName "kube-api-access-zqb8n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:38:19 crc kubenswrapper[4933]: I1201 09:38:19.499453 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b5000d1-43e9-4fb9-ade4-81f02de8e6e2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8b5000d1-43e9-4fb9-ade4-81f02de8e6e2" (UID: "8b5000d1-43e9-4fb9-ade4-81f02de8e6e2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:38:19 crc kubenswrapper[4933]: I1201 09:38:19.585689 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b5000d1-43e9-4fb9-ade4-81f02de8e6e2-config\") on node \"crc\" DevicePath \"\""
Dec 01 09:38:19 crc kubenswrapper[4933]: I1201 09:38:19.585724 4933 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b5000d1-43e9-4fb9-ade4-81f02de8e6e2-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 01 09:38:19 crc kubenswrapper[4933]: I1201 09:38:19.585737 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqb8n\" (UniqueName: \"kubernetes.io/projected/8b5000d1-43e9-4fb9-ade4-81f02de8e6e2-kube-api-access-zqb8n\") on node \"crc\" DevicePath \"\""
Dec 01 09:38:19 crc kubenswrapper[4933]: I1201 09:38:19.585747 4933 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b5000d1-43e9-4fb9-ade4-81f02de8e6e2-client-ca\") on node \"crc\" DevicePath \"\""
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.273489 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh" event={"ID":"8b5000d1-43e9-4fb9-ade4-81f02de8e6e2","Type":"ContainerDied","Data":"117fe4bdaa5cb71f9e7b76cec21795ce67299a18becf5093a8b0532583922e50"}
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.273574 4933 scope.go:117] "RemoveContainer" containerID="7be99243f67f4fbe71f1f943188f4ce6069f572dadff108e96f5de9b1271b5a2"
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.273609 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh"
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.300440 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh"]
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.304668 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f7b94877-kh8bh"]
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.540346 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64ddbdb4f7-jplh7"]
Dec 01 09:38:20 crc kubenswrapper[4933]: E1201 09:38:20.540890 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b5000d1-43e9-4fb9-ade4-81f02de8e6e2" containerName="route-controller-manager"
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.540959 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5000d1-43e9-4fb9-ade4-81f02de8e6e2" containerName="route-controller-manager"
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.541140 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b5000d1-43e9-4fb9-ade4-81f02de8e6e2" containerName="route-controller-manager"
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.541668 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64ddbdb4f7-jplh7"
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.546006 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.546695 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.547008 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.547027 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.547232 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.547392 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.557222 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64ddbdb4f7-jplh7"]
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.702184 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64l9t\" (UniqueName: \"kubernetes.io/projected/79e8bde8-11a5-4036-a1b3-a348cbff0f04-kube-api-access-64l9t\") pod \"route-controller-manager-64ddbdb4f7-jplh7\" (UID: \"79e8bde8-11a5-4036-a1b3-a348cbff0f04\") " pod="openshift-route-controller-manager/route-controller-manager-64ddbdb4f7-jplh7"
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.702269 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79e8bde8-11a5-4036-a1b3-a348cbff0f04-config\") pod \"route-controller-manager-64ddbdb4f7-jplh7\" (UID: \"79e8bde8-11a5-4036-a1b3-a348cbff0f04\") " pod="openshift-route-controller-manager/route-controller-manager-64ddbdb4f7-jplh7"
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.702326 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79e8bde8-11a5-4036-a1b3-a348cbff0f04-serving-cert\") pod \"route-controller-manager-64ddbdb4f7-jplh7\" (UID: \"79e8bde8-11a5-4036-a1b3-a348cbff0f04\") " pod="openshift-route-controller-manager/route-controller-manager-64ddbdb4f7-jplh7"
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.702358 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79e8bde8-11a5-4036-a1b3-a348cbff0f04-client-ca\") pod \"route-controller-manager-64ddbdb4f7-jplh7\" (UID: \"79e8bde8-11a5-4036-a1b3-a348cbff0f04\") " pod="openshift-route-controller-manager/route-controller-manager-64ddbdb4f7-jplh7"
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.804176 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64l9t\" (UniqueName: \"kubernetes.io/projected/79e8bde8-11a5-4036-a1b3-a348cbff0f04-kube-api-access-64l9t\") pod \"route-controller-manager-64ddbdb4f7-jplh7\" (UID: \"79e8bde8-11a5-4036-a1b3-a348cbff0f04\") " pod="openshift-route-controller-manager/route-controller-manager-64ddbdb4f7-jplh7"
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.804267 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79e8bde8-11a5-4036-a1b3-a348cbff0f04-config\") pod \"route-controller-manager-64ddbdb4f7-jplh7\" (UID: \"79e8bde8-11a5-4036-a1b3-a348cbff0f04\") " pod="openshift-route-controller-manager/route-controller-manager-64ddbdb4f7-jplh7"
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.804320 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79e8bde8-11a5-4036-a1b3-a348cbff0f04-serving-cert\") pod \"route-controller-manager-64ddbdb4f7-jplh7\" (UID: \"79e8bde8-11a5-4036-a1b3-a348cbff0f04\") " pod="openshift-route-controller-manager/route-controller-manager-64ddbdb4f7-jplh7"
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.804341 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79e8bde8-11a5-4036-a1b3-a348cbff0f04-client-ca\") pod \"route-controller-manager-64ddbdb4f7-jplh7\" (UID: \"79e8bde8-11a5-4036-a1b3-a348cbff0f04\") " pod="openshift-route-controller-manager/route-controller-manager-64ddbdb4f7-jplh7"
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.805327 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79e8bde8-11a5-4036-a1b3-a348cbff0f04-client-ca\") pod \"route-controller-manager-64ddbdb4f7-jplh7\" (UID: \"79e8bde8-11a5-4036-a1b3-a348cbff0f04\") " pod="openshift-route-controller-manager/route-controller-manager-64ddbdb4f7-jplh7"
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.805606 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79e8bde8-11a5-4036-a1b3-a348cbff0f04-config\") pod \"route-controller-manager-64ddbdb4f7-jplh7\" (UID: \"79e8bde8-11a5-4036-a1b3-a348cbff0f04\") " pod="openshift-route-controller-manager/route-controller-manager-64ddbdb4f7-jplh7"
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.811334 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79e8bde8-11a5-4036-a1b3-a348cbff0f04-serving-cert\") pod \"route-controller-manager-64ddbdb4f7-jplh7\" (UID: \"79e8bde8-11a5-4036-a1b3-a348cbff0f04\") " pod="openshift-route-controller-manager/route-controller-manager-64ddbdb4f7-jplh7"
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.827902 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64l9t\" (UniqueName: \"kubernetes.io/projected/79e8bde8-11a5-4036-a1b3-a348cbff0f04-kube-api-access-64l9t\") pod \"route-controller-manager-64ddbdb4f7-jplh7\" (UID: \"79e8bde8-11a5-4036-a1b3-a348cbff0f04\") " pod="openshift-route-controller-manager/route-controller-manager-64ddbdb4f7-jplh7"
Dec 01 09:38:20 crc kubenswrapper[4933]: I1201 09:38:20.859983 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64ddbdb4f7-jplh7"
Dec 01 09:38:21 crc kubenswrapper[4933]: I1201 09:38:21.285053 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64ddbdb4f7-jplh7"]
Dec 01 09:38:21 crc kubenswrapper[4933]: I1201 09:38:21.669986 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" podUID="123185e0-6f42-4a97-8107-c1e8a91d0ea9" containerName="registry" containerID="cri-o://5c5a83f93c551897a0082d1d7aaf46e2b06f9d084ce46bb8f71431d99444558b" gracePeriod=30
Dec 01 09:38:21 crc kubenswrapper[4933]: I1201 09:38:21.676277 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b5000d1-43e9-4fb9-ade4-81f02de8e6e2" path="/var/lib/kubelet/pods/8b5000d1-43e9-4fb9-ade4-81f02de8e6e2/volumes"
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.045385 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-znqzs"
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.123441 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/123185e0-6f42-4a97-8107-c1e8a91d0ea9-installation-pull-secrets\") pod \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") "
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.123989 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dcdv\" (UniqueName: \"kubernetes.io/projected/123185e0-6f42-4a97-8107-c1e8a91d0ea9-kube-api-access-6dcdv\") pod \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") "
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.124026 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/123185e0-6f42-4a97-8107-c1e8a91d0ea9-bound-sa-token\") pod \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") "
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.124270 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") "
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.124368 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/123185e0-6f42-4a97-8107-c1e8a91d0ea9-ca-trust-extracted\") pod \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") "
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.124397 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/123185e0-6f42-4a97-8107-c1e8a91d0ea9-registry-tls\") pod \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") "
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.124446 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/123185e0-6f42-4a97-8107-c1e8a91d0ea9-registry-certificates\") pod \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") "
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.124474 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/123185e0-6f42-4a97-8107-c1e8a91d0ea9-trusted-ca\") pod \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\" (UID: \"123185e0-6f42-4a97-8107-c1e8a91d0ea9\") "
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.125336 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/123185e0-6f42-4a97-8107-c1e8a91d0ea9-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "123185e0-6f42-4a97-8107-c1e8a91d0ea9" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.125410 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/123185e0-6f42-4a97-8107-c1e8a91d0ea9-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "123185e0-6f42-4a97-8107-c1e8a91d0ea9" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.130739 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/123185e0-6f42-4a97-8107-c1e8a91d0ea9-kube-api-access-6dcdv" (OuterVolumeSpecName: "kube-api-access-6dcdv") pod "123185e0-6f42-4a97-8107-c1e8a91d0ea9" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9"). InnerVolumeSpecName "kube-api-access-6dcdv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.130931 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/123185e0-6f42-4a97-8107-c1e8a91d0ea9-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "123185e0-6f42-4a97-8107-c1e8a91d0ea9" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.137621 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/123185e0-6f42-4a97-8107-c1e8a91d0ea9-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "123185e0-6f42-4a97-8107-c1e8a91d0ea9" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.138565 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/123185e0-6f42-4a97-8107-c1e8a91d0ea9-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "123185e0-6f42-4a97-8107-c1e8a91d0ea9" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.138832 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "123185e0-6f42-4a97-8107-c1e8a91d0ea9" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.149133 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/123185e0-6f42-4a97-8107-c1e8a91d0ea9-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "123185e0-6f42-4a97-8107-c1e8a91d0ea9" (UID: "123185e0-6f42-4a97-8107-c1e8a91d0ea9"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.225786 4933 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/123185e0-6f42-4a97-8107-c1e8a91d0ea9-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.225835 4933 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/123185e0-6f42-4a97-8107-c1e8a91d0ea9-registry-tls\") on node \"crc\" DevicePath \"\""
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.225844 4933 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/123185e0-6f42-4a97-8107-c1e8a91d0ea9-registry-certificates\") on node \"crc\" DevicePath \"\""
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.225856 4933 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/123185e0-6f42-4a97-8107-c1e8a91d0ea9-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.225866 4933 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/123185e0-6f42-4a97-8107-c1e8a91d0ea9-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.225875 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dcdv\" (UniqueName: \"kubernetes.io/projected/123185e0-6f42-4a97-8107-c1e8a91d0ea9-kube-api-access-6dcdv\") on node \"crc\" DevicePath \"\""
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.225883 4933 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/123185e0-6f42-4a97-8107-c1e8a91d0ea9-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.292897 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64ddbdb4f7-jplh7" event={"ID":"79e8bde8-11a5-4036-a1b3-a348cbff0f04","Type":"ContainerStarted","Data":"5b29aa7f4e37ff56e4e5a6a6d876e301878bd26b46dd62c8ce312c62dfce1036"}
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.293071 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64ddbdb4f7-jplh7" event={"ID":"79e8bde8-11a5-4036-a1b3-a348cbff0f04","Type":"ContainerStarted","Data":"def1b3940d1a3fb941f2e096bae78060b4a1627fbc0b8f11da396f6021e426ec"}
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.293152 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-64ddbdb4f7-jplh7"
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.295150 4933 generic.go:334] "Generic (PLEG): container finished" podID="123185e0-6f42-4a97-8107-c1e8a91d0ea9" containerID="5c5a83f93c551897a0082d1d7aaf46e2b06f9d084ce46bb8f71431d99444558b" exitCode=0
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.295202 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" event={"ID":"123185e0-6f42-4a97-8107-c1e8a91d0ea9","Type":"ContainerDied","Data":"5c5a83f93c551897a0082d1d7aaf46e2b06f9d084ce46bb8f71431d99444558b"}
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.295243 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-znqzs" event={"ID":"123185e0-6f42-4a97-8107-c1e8a91d0ea9","Type":"ContainerDied","Data":"a2650103232582969df238911602cf92bc241bedde968eb23e513d93835e723e"}
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.295264 4933 scope.go:117] "RemoveContainer" containerID="5c5a83f93c551897a0082d1d7aaf46e2b06f9d084ce46bb8f71431d99444558b"
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.295340 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-znqzs"
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.298803 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-64ddbdb4f7-jplh7"
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.311359 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-64ddbdb4f7-jplh7" podStartSLOduration=4.311343411 podStartE2EDuration="4.311343411s" podCreationTimestamp="2025-12-01 09:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:38:22.310047899 +0000 UTC m=+392.951771524" watchObservedRunningTime="2025-12-01 09:38:22.311343411 +0000 UTC m=+392.953067026"
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.315868 4933 scope.go:117] "RemoveContainer" containerID="5c5a83f93c551897a0082d1d7aaf46e2b06f9d084ce46bb8f71431d99444558b"
Dec 01 09:38:22 crc kubenswrapper[4933]: E1201 09:38:22.316649 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c5a83f93c551897a0082d1d7aaf46e2b06f9d084ce46bb8f71431d99444558b\": container with ID starting with 5c5a83f93c551897a0082d1d7aaf46e2b06f9d084ce46bb8f71431d99444558b not found: ID does not exist" containerID="5c5a83f93c551897a0082d1d7aaf46e2b06f9d084ce46bb8f71431d99444558b"
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.316713 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c5a83f93c551897a0082d1d7aaf46e2b06f9d084ce46bb8f71431d99444558b"} err="failed to get container status \"5c5a83f93c551897a0082d1d7aaf46e2b06f9d084ce46bb8f71431d99444558b\": rpc error: code = NotFound desc = could not find container \"5c5a83f93c551897a0082d1d7aaf46e2b06f9d084ce46bb8f71431d99444558b\": container with ID starting with 5c5a83f93c551897a0082d1d7aaf46e2b06f9d084ce46bb8f71431d99444558b not found: ID does not exist"
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.366121 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-znqzs"]
Dec 01 09:38:22 crc kubenswrapper[4933]: I1201 09:38:22.370984 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-znqzs"]
Dec 01 09:38:23 crc kubenswrapper[4933]: I1201 09:38:23.675578 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="123185e0-6f42-4a97-8107-c1e8a91d0ea9" path="/var/lib/kubelet/pods/123185e0-6f42-4a97-8107-c1e8a91d0ea9/volumes"
Dec 01 09:38:41 crc kubenswrapper[4933]: I1201 09:38:41.740933 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 09:38:41 crc kubenswrapper[4933]: I1201 09:38:41.742262 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 09:38:41 crc kubenswrapper[4933]: I1201 09:38:41.742351 4933 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd"
Dec 01 09:38:41 crc kubenswrapper[4933]: I1201 09:38:41.743188 4933 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"530143e76725bdde118c91d28d335ba05105652665eaa497ed10370fd16dac0b"} pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 01 09:38:41 crc kubenswrapper[4933]: I1201 09:38:41.743274 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" containerID="cri-o://530143e76725bdde118c91d28d335ba05105652665eaa497ed10370fd16dac0b" gracePeriod=600
Dec 01 09:38:42 crc kubenswrapper[4933]: I1201 09:38:42.424491 4933 generic.go:334] "Generic (PLEG): container finished" podID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerID="530143e76725bdde118c91d28d335ba05105652665eaa497ed10370fd16dac0b" exitCode=0
Dec 01 09:38:42 crc kubenswrapper[4933]: I1201 09:38:42.424532 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerDied","Data":"530143e76725bdde118c91d28d335ba05105652665eaa497ed10370fd16dac0b"}
Dec 01 09:38:42 crc kubenswrapper[4933]: I1201 09:38:42.424864 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerStarted","Data":"f1d5bd612e31b9b4eae9b44f24aa08f8e8c6dbbfb00bd3bb8556671820bec1e5"}
Dec 01 09:38:42 crc kubenswrapper[4933]: I1201 09:38:42.424893 4933 scope.go:117] "RemoveContainer" containerID="3e2ab7ed8b88ae080e337a6973dbf930b7f7c9d154f1fbfcc430bf51ad0c4c25"
Dec 01 09:40:49 crc kubenswrapper[4933]: I1201 09:40:49.814665 4933 scope.go:117] "RemoveContainer" containerID="3cede0bf4cabf888a9bd44f0ee3514aca4d7a5f50aa450ab392bc92af7f98342"
Dec 01 09:40:49 crc kubenswrapper[4933]: I1201 09:40:49.843276 4933 scope.go:117] "RemoveContainer" containerID="2ae67593b03352d9b1881b6ff881c605afac2082cf75dd88e7e71375ea413121"
Dec 01 09:41:11 crc kubenswrapper[4933]: I1201 09:41:11.741503 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 09:41:11 crc kubenswrapper[4933]: I1201 09:41:11.742404 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 09:41:41 crc kubenswrapper[4933]: I1201 09:41:41.741053 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 09:41:41 crc kubenswrapper[4933]: I1201 09:41:41.741745 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 09:41:49 crc kubenswrapper[4933]: I1201 09:41:49.873621 4933 scope.go:117] "RemoveContainer" containerID="a52375e49236b7456d0bd44a3f2c4c74f30915d50e0bb5fb5a13cea383ce4c5c"
Dec 01 09:41:49 crc kubenswrapper[4933]: I1201 09:41:49.892101 4933 scope.go:117] "RemoveContainer" containerID="7a55f0b45fc14128779cf55d2704208252d808b5ca2211f84c1bd95f7f05e565"
Dec 01 09:41:49 crc kubenswrapper[4933]: I1201 09:41:49.906827 4933 scope.go:117] "RemoveContainer" containerID="2409398b81c2f1726c860d3e9c135a83a97b13ac18c93da758cd6d3bc3df75c2"
Dec 01 09:41:49 crc kubenswrapper[4933]: I1201 09:41:49.923607 4933 scope.go:117] "RemoveContainer" containerID="02dd55a589fcc88f8ba5fcb71ad03d2aa2766238863c0b11e822abe58e90b356"
Dec 01 09:42:11 crc kubenswrapper[4933]: I1201 09:42:11.741494 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 09:42:11 crc kubenswrapper[4933]: I1201 09:42:11.742579 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 09:42:11 crc kubenswrapper[4933]: I1201 09:42:11.742661 4933 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd"
Dec 01 09:42:11 crc kubenswrapper[4933]: I1201 09:42:11.743846 4933 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f1d5bd612e31b9b4eae9b44f24aa08f8e8c6dbbfb00bd3bb8556671820bec1e5"} pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 01 09:42:11 crc kubenswrapper[4933]: I1201 09:42:11.743992 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" containerID="cri-o://f1d5bd612e31b9b4eae9b44f24aa08f8e8c6dbbfb00bd3bb8556671820bec1e5" gracePeriod=600
Dec 01 09:42:12 crc kubenswrapper[4933]: I1201 09:42:12.057650 4933 generic.go:334] "Generic (PLEG): container finished" podID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerID="f1d5bd612e31b9b4eae9b44f24aa08f8e8c6dbbfb00bd3bb8556671820bec1e5" exitCode=0
Dec 01 09:42:12 crc kubenswrapper[4933]: I1201 09:42:12.057700 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerDied","Data":"f1d5bd612e31b9b4eae9b44f24aa08f8e8c6dbbfb00bd3bb8556671820bec1e5"}
Dec 01 09:42:12 crc kubenswrapper[4933]: I1201 09:42:12.058082 4933 scope.go:117] "RemoveContainer" containerID="530143e76725bdde118c91d28d335ba05105652665eaa497ed10370fd16dac0b"
Dec 01 09:42:13 crc kubenswrapper[4933]: I1201 09:42:13.069150 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerStarted","Data":"9b9c9050f180243e388ba92ad81faccae53ee3940480103d59e4ab9a26921bbd"}
Dec 01 09:43:07 crc kubenswrapper[4933]: I1201 09:43:07.833851 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-5b7g6"]
Dec 01 09:43:07 crc kubenswrapper[4933]: E1201 09:43:07.834635 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="123185e0-6f42-4a97-8107-c1e8a91d0ea9" containerName="registry"
Dec 01 09:43:07 crc kubenswrapper[4933]: I1201 09:43:07.834647 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="123185e0-6f42-4a97-8107-c1e8a91d0ea9" containerName="registry"
Dec 01 09:43:07 crc kubenswrapper[4933]: I1201 09:43:07.834746 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="123185e0-6f42-4a97-8107-c1e8a91d0ea9" containerName="registry"
Dec 01 09:43:07 crc kubenswrapper[4933]: I1201 09:43:07.835115 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-5b7g6"
Dec 01 09:43:07 crc kubenswrapper[4933]: I1201 09:43:07.837371 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Dec 01 09:43:07 crc kubenswrapper[4933]: I1201 09:43:07.837485 4933 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-6xx7n"
Dec 01 09:43:07 crc kubenswrapper[4933]: I1201 09:43:07.837890 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Dec 01 09:43:07 crc kubenswrapper[4933]: I1201 09:43:07.849547 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-5b7g6"]
Dec 01 09:43:07 crc kubenswrapper[4933]: I1201 09:43:07.854079 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-ts7sf"]
Dec 01 09:43:07 crc kubenswrapper[4933]: I1201 09:43:07.854892 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-ts7sf"
Dec 01 09:43:07 crc kubenswrapper[4933]: I1201 09:43:07.856769 4933 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-wgvfz"
Dec 01 09:43:07 crc kubenswrapper[4933]: I1201 09:43:07.866781 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-ts7sf"]
Dec 01 09:43:07 crc kubenswrapper[4933]: I1201 09:43:07.870724 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7glj\" (UniqueName: \"kubernetes.io/projected/b9576096-fd1b-4f6e-95c9-37517c77cca1-kube-api-access-c7glj\") pod \"cert-manager-5b446d88c5-ts7sf\" (UID: \"b9576096-fd1b-4f6e-95c9-37517c77cca1\") " pod="cert-manager/cert-manager-5b446d88c5-ts7sf"
Dec 01 09:43:07 crc kubenswrapper[4933]: I1201 09:43:07.870807 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v782s\" (UniqueName: \"kubernetes.io/projected/d32d2e97-81db-4119-b9c6-a71b974a56a8-kube-api-access-v782s\") pod \"cert-manager-cainjector-7f985d654d-5b7g6\" (UID: \"d32d2e97-81db-4119-b9c6-a71b974a56a8\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-5b7g6"
Dec 01 09:43:07 crc kubenswrapper[4933]: I1201 09:43:07.883628 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-x8fvf"]
Dec 01 09:43:07 crc kubenswrapper[4933]: I1201 09:43:07.884525 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-x8fvf"
Dec 01 09:43:07 crc kubenswrapper[4933]: I1201 09:43:07.890503 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-x8fvf"]
Dec 01 09:43:07 crc kubenswrapper[4933]: I1201 09:43:07.891900 4933 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-ntv46"
Dec 01 09:43:07 crc kubenswrapper[4933]: I1201 09:43:07.971805 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v782s\" (UniqueName: \"kubernetes.io/projected/d32d2e97-81db-4119-b9c6-a71b974a56a8-kube-api-access-v782s\") pod \"cert-manager-cainjector-7f985d654d-5b7g6\" (UID: \"d32d2e97-81db-4119-b9c6-a71b974a56a8\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-5b7g6"
Dec 01 09:43:07 crc kubenswrapper[4933]: I1201 09:43:07.971851 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzlrk\" (UniqueName: \"kubernetes.io/projected/c54106fe-eb4b-4f41-afce-e9fde8067ec8-kube-api-access-qzlrk\") pod \"cert-manager-webhook-5655c58dd6-x8fvf\" (UID: \"c54106fe-eb4b-4f41-afce-e9fde8067ec8\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-x8fvf"
Dec 01 09:43:07 crc kubenswrapper[4933]: I1201 09:43:07.971898 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7glj\" (UniqueName: \"kubernetes.io/projected/b9576096-fd1b-4f6e-95c9-37517c77cca1-kube-api-access-c7glj\") pod \"cert-manager-5b446d88c5-ts7sf\" (UID: \"b9576096-fd1b-4f6e-95c9-37517c77cca1\") " pod="cert-manager/cert-manager-5b446d88c5-ts7sf"
Dec 01 09:43:07 crc kubenswrapper[4933]: I1201 09:43:07.991031 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v782s\" (UniqueName: \"kubernetes.io/projected/d32d2e97-81db-4119-b9c6-a71b974a56a8-kube-api-access-v782s\") pod \"cert-manager-cainjector-7f985d654d-5b7g6\" (UID: \"d32d2e97-81db-4119-b9c6-a71b974a56a8\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-5b7g6"
Dec 01 09:43:07 crc kubenswrapper[4933]: I1201 09:43:07.991834 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7glj\" (UniqueName: \"kubernetes.io/projected/b9576096-fd1b-4f6e-95c9-37517c77cca1-kube-api-access-c7glj\") pod \"cert-manager-5b446d88c5-ts7sf\" (UID: \"b9576096-fd1b-4f6e-95c9-37517c77cca1\") " pod="cert-manager/cert-manager-5b446d88c5-ts7sf"
Dec 01 09:43:08 crc kubenswrapper[4933]: I1201 09:43:08.072501 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzlrk\" (UniqueName: \"kubernetes.io/projected/c54106fe-eb4b-4f41-afce-e9fde8067ec8-kube-api-access-qzlrk\") pod \"cert-manager-webhook-5655c58dd6-x8fvf\" (UID: \"c54106fe-eb4b-4f41-afce-e9fde8067ec8\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-x8fvf"
Dec 01 09:43:08 crc kubenswrapper[4933]: I1201 09:43:08.090913 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzlrk\" (UniqueName: \"kubernetes.io/projected/c54106fe-eb4b-4f41-afce-e9fde8067ec8-kube-api-access-qzlrk\") pod \"cert-manager-webhook-5655c58dd6-x8fvf\" (UID: \"c54106fe-eb4b-4f41-afce-e9fde8067ec8\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-x8fvf"
Dec 01 09:43:08 crc kubenswrapper[4933]: I1201 09:43:08.152930 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-5b7g6"
Dec 01 09:43:08 crc kubenswrapper[4933]: I1201 09:43:08.169503 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-ts7sf"
Dec 01 09:43:08 crc kubenswrapper[4933]: I1201 09:43:08.199216 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-x8fvf"
Dec 01 09:43:08 crc kubenswrapper[4933]: I1201 09:43:08.364708 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-5b7g6"]
Dec 01 09:43:08 crc kubenswrapper[4933]: I1201 09:43:08.381109 4933 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 01 09:43:08 crc kubenswrapper[4933]: I1201 09:43:08.397691 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-ts7sf"]
Dec 01 09:43:08 crc kubenswrapper[4933]: I1201 09:43:08.449023 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-x8fvf"]
Dec 01 09:43:08 crc kubenswrapper[4933]: W1201 09:43:08.453797 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc54106fe_eb4b_4f41_afce_e9fde8067ec8.slice/crio-1769fb00d81d2e9a44de801e41adc85961cd06fb3c9715e1fbec3b7e0d026467 WatchSource:0}: Error finding container 1769fb00d81d2e9a44de801e41adc85961cd06fb3c9715e1fbec3b7e0d026467: Status 404 returned error can't find the container with id 1769fb00d81d2e9a44de801e41adc85961cd06fb3c9715e1fbec3b7e0d026467
Dec 01 09:43:09 crc kubenswrapper[4933]: I1201 09:43:09.375276 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-ts7sf" event={"ID":"b9576096-fd1b-4f6e-95c9-37517c77cca1","Type":"ContainerStarted","Data":"935a30b5cf5b0f049d2c0185d2afba92359c16874276cf9cf9c58aee4b987b64"}
Dec 01 09:43:09 crc kubenswrapper[4933]: I1201 09:43:09.376331 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-5b7g6" event={"ID":"d32d2e97-81db-4119-b9c6-a71b974a56a8","Type":"ContainerStarted","Data":"a38528d9fd92d1cc4742ecef8f90dda3ecc7f10639fc6400893617788202de44"}
Dec 01 09:43:09 crc kubenswrapper[4933]: I1201 09:43:09.377360 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-x8fvf" event={"ID":"c54106fe-eb4b-4f41-afce-e9fde8067ec8","Type":"ContainerStarted","Data":"1769fb00d81d2e9a44de801e41adc85961cd06fb3c9715e1fbec3b7e0d026467"}
Dec 01 09:43:12 crc kubenswrapper[4933]: I1201 09:43:12.394912 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-ts7sf" event={"ID":"b9576096-fd1b-4f6e-95c9-37517c77cca1","Type":"ContainerStarted","Data":"23e4d4ce6eb77879202842b3f8116b24a264743e6e83139718a1459039e11da5"}
Dec 01 09:43:12 crc kubenswrapper[4933]: I1201 09:43:12.400463 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-5b7g6" event={"ID":"d32d2e97-81db-4119-b9c6-a71b974a56a8","Type":"ContainerStarted","Data":"6f8efca3f4fb5d7d8f73f2c1bcf52fa491a109172a78a711d911f7a692c3bf2e"}
Dec 01 09:43:12 crc kubenswrapper[4933]: I1201 09:43:12.403197 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-x8fvf" event={"ID":"c54106fe-eb4b-4f41-afce-e9fde8067ec8","Type":"ContainerStarted","Data":"c4e11ec804e53f57c456fa4f135edc14e99e2ebf83706897c55a5c6d38b54f3c"}
Dec 01 09:43:12 crc kubenswrapper[4933]: I1201 09:43:12.403387 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-x8fvf"
Dec 01 09:43:12 crc kubenswrapper[4933]: I1201 09:43:12.414064 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-ts7sf" podStartSLOduration=1.913972574 podStartE2EDuration="5.414044403s" podCreationTimestamp="2025-12-01 09:43:07 +0000 UTC" firstStartedPulling="2025-12-01 09:43:08.412205778 +0000 UTC m=+679.053929393" lastFinishedPulling="2025-12-01 09:43:11.912277607 +0000 UTC m=+682.554001222" observedRunningTime="2025-12-01 09:43:12.410791053 +0000 UTC m=+683.052514668" watchObservedRunningTime="2025-12-01 09:43:12.414044403 +0000 UTC m=+683.055768018"
Dec 01 09:43:12 crc kubenswrapper[4933]: I1201 09:43:12.431257 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-x8fvf" podStartSLOduration=2.040866184 podStartE2EDuration="5.431240058s" podCreationTimestamp="2025-12-01 09:43:07 +0000 UTC" firstStartedPulling="2025-12-01 09:43:08.455661633 +0000 UTC m=+679.097385248" lastFinishedPulling="2025-12-01 09:43:11.846035507 +0000 UTC m=+682.487759122" observedRunningTime="2025-12-01 09:43:12.43011318 +0000 UTC m=+683.071836795" watchObservedRunningTime="2025-12-01 09:43:12.431240058 +0000 UTC m=+683.072963673"
Dec 01 09:43:12 crc kubenswrapper[4933]: I1201 09:43:12.449853 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-5b7g6" podStartSLOduration=1.983497035 podStartE2EDuration="5.449834149s" podCreationTimestamp="2025-12-01 09:43:07 +0000 UTC" firstStartedPulling="2025-12-01 09:43:08.380803531 +0000 UTC m=+679.022527146" lastFinishedPulling="2025-12-01 09:43:11.847140645 +0000 UTC m=+682.488864260" observedRunningTime="2025-12-01 09:43:12.446549147 +0000 UTC m=+683.088272762" watchObservedRunningTime="2025-12-01 09:43:12.449834149 +0000 UTC m=+683.091557764"
Dec 01 09:43:18 crc kubenswrapper[4933]: I1201 09:43:18.203922 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-x8fvf"
Dec 01 09:43:27 crc kubenswrapper[4933]: I1201 09:43:27.998285 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zccpd"]
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:27.999088 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="nbdb" containerID="cri-o://d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d" gracePeriod=30
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:27.999248 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="sbdb" containerID="cri-o://726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1" gracePeriod=30
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:27.999230 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976" gracePeriod=30
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:27.999389 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="kube-rbac-proxy-node" containerID="cri-o://6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940" gracePeriod=30
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:27.999438 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="northd" containerID="cri-o://e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3" gracePeriod=30
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:27.999482 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="ovn-acl-logging" containerID="cri-o://c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38" gracePeriod=30
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:27.999052 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="ovn-controller" containerID="cri-o://07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1" gracePeriod=30
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.045283 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="ovnkube-controller" containerID="cri-o://e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0" gracePeriod=30
Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.053075 4933 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"]
Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.065370 4933 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"]
Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.071534 4933 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"]
Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.071665 4933 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="ovnkube-controller"
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.352053 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zccpd_d49bee31-b7e9-4daa-986f-b6f58c663813/ovnkube-controller/3.log"
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.355643 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zccpd_d49bee31-b7e9-4daa-986f-b6f58c663813/ovn-acl-logging/0.log"
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.356807 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zccpd_d49bee31-b7e9-4daa-986f-b6f58c663813/ovn-controller/0.log"
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.357708 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd"
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.418567 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-r692q"]
Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.419138 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="kube-rbac-proxy-node"
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.419225 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="kube-rbac-proxy-node"
Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.419297 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="ovnkube-controller"
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.419368 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="ovnkube-controller"
Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.419430 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="ovnkube-controller"
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.419480 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="ovnkube-controller"
Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.419530 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="northd"
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.419589 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="northd"
Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.419677 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="kubecfg-setup"
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.419735 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="kubecfg-setup"
Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.419790 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="nbdb"
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.419847 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="nbdb"
Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.419922 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="sbdb"
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.419975 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="sbdb"
Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.420029 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="ovnkube-controller"
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.420082 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="ovnkube-controller"
Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.420155 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="kube-rbac-proxy-ovn-metrics"
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.420223 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="kube-rbac-proxy-ovn-metrics"
Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.420286 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="ovnkube-controller"
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.420382 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="ovnkube-controller"
Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.420445 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="ovn-acl-logging"
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.420499 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="ovn-acl-logging"
Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.420545 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="ovn-controller"
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.420600 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="ovn-controller"
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.420761 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="kube-rbac-proxy-ovn-metrics"
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.420843 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="kube-rbac-proxy-node"
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.420917 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="sbdb"
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.420999 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="northd"
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.421059 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="ovnkube-controller"
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.421131 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="ovnkube-controller"
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.421195 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="nbdb"
Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.421253 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813"
containerName="ovn-controller" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.421326 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="ovnkube-controller" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.421390 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="ovnkube-controller" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.421452 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="ovn-acl-logging" Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.421615 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="ovnkube-controller" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.421672 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="ovnkube-controller" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.421864 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerName="ovnkube-controller" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.423894 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.442350 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-run-openvswitch\") pod \"d49bee31-b7e9-4daa-986f-b6f58c663813\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.442414 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-var-lib-cni-networks-ovn-kubernetes\") pod \"d49bee31-b7e9-4daa-986f-b6f58c663813\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.442442 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-log-socket\") pod \"d49bee31-b7e9-4daa-986f-b6f58c663813\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.442467 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-run-systemd\") pod \"d49bee31-b7e9-4daa-986f-b6f58c663813\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.442488 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-systemd-units\") pod \"d49bee31-b7e9-4daa-986f-b6f58c663813\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.442523 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d49bee31-b7e9-4daa-986f-b6f58c663813-ovnkube-config\") pod 
\"d49bee31-b7e9-4daa-986f-b6f58c663813\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.442574 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-etc-openvswitch\") pod \"d49bee31-b7e9-4daa-986f-b6f58c663813\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.442594 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-run-netns\") pod \"d49bee31-b7e9-4daa-986f-b6f58c663813\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.442637 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-node-log\") pod \"d49bee31-b7e9-4daa-986f-b6f58c663813\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.442662 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-run-ovn-kubernetes\") pod \"d49bee31-b7e9-4daa-986f-b6f58c663813\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.442687 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d49bee31-b7e9-4daa-986f-b6f58c663813-env-overrides\") pod \"d49bee31-b7e9-4daa-986f-b6f58c663813\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.442728 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-slash\") pod \"d49bee31-b7e9-4daa-986f-b6f58c663813\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.442754 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-kubelet\") pod \"d49bee31-b7e9-4daa-986f-b6f58c663813\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.442772 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-cni-bin\") pod \"d49bee31-b7e9-4daa-986f-b6f58c663813\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.442790 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-var-lib-openvswitch\") pod \"d49bee31-b7e9-4daa-986f-b6f58c663813\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.442815 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d49bee31-b7e9-4daa-986f-b6f58c663813-ovn-node-metrics-cert\") pod 
\"d49bee31-b7e9-4daa-986f-b6f58c663813\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.442837 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-cni-netd\") pod \"d49bee31-b7e9-4daa-986f-b6f58c663813\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.442856 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-run-ovn\") pod \"d49bee31-b7e9-4daa-986f-b6f58c663813\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.442880 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9968\" (UniqueName: \"kubernetes.io/projected/d49bee31-b7e9-4daa-986f-b6f58c663813-kube-api-access-d9968\") pod \"d49bee31-b7e9-4daa-986f-b6f58c663813\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.442911 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d49bee31-b7e9-4daa-986f-b6f58c663813-ovnkube-script-lib\") pod \"d49bee31-b7e9-4daa-986f-b6f58c663813\" (UID: \"d49bee31-b7e9-4daa-986f-b6f58c663813\") " Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.443115 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "d49bee31-b7e9-4daa-986f-b6f58c663813" (UID: "d49bee31-b7e9-4daa-986f-b6f58c663813"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.443167 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "d49bee31-b7e9-4daa-986f-b6f58c663813" (UID: "d49bee31-b7e9-4daa-986f-b6f58c663813"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.443249 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "d49bee31-b7e9-4daa-986f-b6f58c663813" (UID: "d49bee31-b7e9-4daa-986f-b6f58c663813"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.443471 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "d49bee31-b7e9-4daa-986f-b6f58c663813" (UID: "d49bee31-b7e9-4daa-986f-b6f58c663813"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.443503 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-slash" (OuterVolumeSpecName: "host-slash") pod "d49bee31-b7e9-4daa-986f-b6f58c663813" (UID: "d49bee31-b7e9-4daa-986f-b6f58c663813"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.443524 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-log-socket" (OuterVolumeSpecName: "log-socket") pod "d49bee31-b7e9-4daa-986f-b6f58c663813" (UID: "d49bee31-b7e9-4daa-986f-b6f58c663813"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.443537 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-node-log" (OuterVolumeSpecName: "node-log") pod "d49bee31-b7e9-4daa-986f-b6f58c663813" (UID: "d49bee31-b7e9-4daa-986f-b6f58c663813"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.443554 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "d49bee31-b7e9-4daa-986f-b6f58c663813" (UID: "d49bee31-b7e9-4daa-986f-b6f58c663813"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.443574 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "d49bee31-b7e9-4daa-986f-b6f58c663813" (UID: "d49bee31-b7e9-4daa-986f-b6f58c663813"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.443591 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "d49bee31-b7e9-4daa-986f-b6f58c663813" (UID: "d49bee31-b7e9-4daa-986f-b6f58c663813"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.443555 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "d49bee31-b7e9-4daa-986f-b6f58c663813" (UID: "d49bee31-b7e9-4daa-986f-b6f58c663813"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.443592 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "d49bee31-b7e9-4daa-986f-b6f58c663813" (UID: "d49bee31-b7e9-4daa-986f-b6f58c663813"). 
InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.443615 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "d49bee31-b7e9-4daa-986f-b6f58c663813" (UID: "d49bee31-b7e9-4daa-986f-b6f58c663813"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.443465 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "d49bee31-b7e9-4daa-986f-b6f58c663813" (UID: "d49bee31-b7e9-4daa-986f-b6f58c663813"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.443673 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d49bee31-b7e9-4daa-986f-b6f58c663813-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "d49bee31-b7e9-4daa-986f-b6f58c663813" (UID: "d49bee31-b7e9-4daa-986f-b6f58c663813"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.443830 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d49bee31-b7e9-4daa-986f-b6f58c663813-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "d49bee31-b7e9-4daa-986f-b6f58c663813" (UID: "d49bee31-b7e9-4daa-986f-b6f58c663813"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.444924 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d49bee31-b7e9-4daa-986f-b6f58c663813-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d49bee31-b7e9-4daa-986f-b6f58c663813" (UID: "d49bee31-b7e9-4daa-986f-b6f58c663813"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.451064 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d49bee31-b7e9-4daa-986f-b6f58c663813-kube-api-access-d9968" (OuterVolumeSpecName: "kube-api-access-d9968") pod "d49bee31-b7e9-4daa-986f-b6f58c663813" (UID: "d49bee31-b7e9-4daa-986f-b6f58c663813"). InnerVolumeSpecName "kube-api-access-d9968". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.451467 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49bee31-b7e9-4daa-986f-b6f58c663813-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "d49bee31-b7e9-4daa-986f-b6f58c663813" (UID: "d49bee31-b7e9-4daa-986f-b6f58c663813"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.461103 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "d49bee31-b7e9-4daa-986f-b6f58c663813" (UID: "d49bee31-b7e9-4daa-986f-b6f58c663813"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.490166 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zccpd_d49bee31-b7e9-4daa-986f-b6f58c663813/ovnkube-controller/3.log" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.493259 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zccpd_d49bee31-b7e9-4daa-986f-b6f58c663813/ovn-acl-logging/0.log" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.493897 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zccpd_d49bee31-b7e9-4daa-986f-b6f58c663813/ovn-controller/0.log" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494516 4933 generic.go:334] "Generic (PLEG): container finished" podID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerID="e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0" exitCode=0 Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494555 4933 generic.go:334] "Generic (PLEG): container finished" podID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerID="726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1" exitCode=0 Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494567 4933 generic.go:334] "Generic (PLEG): container finished" podID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerID="d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d" exitCode=0 Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494575 4933 generic.go:334] "Generic (PLEG): container finished" podID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerID="e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3" exitCode=0 Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494585 4933 generic.go:334] "Generic (PLEG): container finished" podID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerID="c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976" exitCode=0 Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494595 4933 generic.go:334] "Generic (PLEG): container finished" podID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerID="6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940" exitCode=0 Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494603 4933 generic.go:334] "Generic (PLEG): container finished" podID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerID="c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38" exitCode=143 Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494611 4933 generic.go:334] "Generic (PLEG): container finished" podID="d49bee31-b7e9-4daa-986f-b6f58c663813" containerID="07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1" exitCode=143 Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494683 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerDied","Data":"e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0"} Dec 
01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494696 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494732 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerDied","Data":"726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494746 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerDied","Data":"d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494754 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerDied","Data":"e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494764 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerDied","Data":"c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494777 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerDied","Data":"6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494792 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494804 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494811 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494816 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494821 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494827 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494832 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38"} Dec 01 09:43:28 crc 
kubenswrapper[4933]: I1201 09:43:28.494836 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494842 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494848 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerDied","Data":"c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494855 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494862 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494868 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494873 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494878 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494883 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494888 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494893 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494899 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494905 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494912 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" 
event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerDied","Data":"07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494924 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494931 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494937 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494942 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494947 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494952 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494959 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494964 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494969 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494975 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494981 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zccpd" event={"ID":"d49bee31-b7e9-4daa-986f-b6f58c663813","Type":"ContainerDied","Data":"81a563a9935749eba26db6f6e6876cc28a7d87e9118941b7816dd644cb486c78"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494988 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494994 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.494999 4933 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.495004 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.495009 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.495014 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.495019 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.495024 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.495031 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.495048 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.495065 4933 scope.go:117] "RemoveContainer" containerID="e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.498800 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4fncv_f0c7b4b8-8e07-4bd4-b811-cdb373873e8a/kube-multus/2.log" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.500580 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4fncv_f0c7b4b8-8e07-4bd4-b811-cdb373873e8a/kube-multus/1.log" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.501041 4933 generic.go:334] "Generic (PLEG): container finished" podID="f0c7b4b8-8e07-4bd4-b811-cdb373873e8a" containerID="fa67bf1207226c8b6f7a005f4e479007b6cf584107103b695e65b9c6c160fbed" exitCode=2 Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.501094 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4fncv" event={"ID":"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a","Type":"ContainerDied","Data":"fa67bf1207226c8b6f7a005f4e479007b6cf584107103b695e65b9c6c160fbed"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.501138 4933 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ac251024105496fb2cd821720a3ad6e717ef9c6da03401d62a0d58a96dce58f"} Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.501710 4933 scope.go:117] "RemoveContainer" containerID="fa67bf1207226c8b6f7a005f4e479007b6cf584107103b695e65b9c6c160fbed" Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.502052 4933 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-4fncv_openshift-multus(f0c7b4b8-8e07-4bd4-b811-cdb373873e8a)\"" pod="openshift-multus/multus-4fncv" podUID="f0c7b4b8-8e07-4bd4-b811-cdb373873e8a" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.520419 4933 scope.go:117] "RemoveContainer" containerID="74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.544612 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-host-run-netns\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.544676 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-host-cni-bin\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.544692 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-node-log\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.544711 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-env-overrides\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.544736 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-etc-openvswitch\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.544759 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-host-slash\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.544778 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-host-run-ovn-kubernetes\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.544805 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-var-lib-openvswitch\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.544841 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwjb6\" (UniqueName: \"kubernetes.io/projected/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-kube-api-access-kwjb6\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.544859 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-log-socket\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.544881 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-host-cni-netd\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.544901 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.544926 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-host-kubelet\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.544943 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-run-systemd\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.544962 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-run-ovn\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.544982 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-ovn-node-metrics-cert\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.545001 4933 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-systemd-units\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.545019 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-ovnkube-config\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.545037 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-ovnkube-script-lib\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.545058 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-run-openvswitch\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.545103 4933 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d49bee31-b7e9-4daa-986f-b6f58c663813-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.545115 4933 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.545125 4933 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.545135 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9968\" (UniqueName: \"kubernetes.io/projected/d49bee31-b7e9-4daa-986f-b6f58c663813-kube-api-access-d9968\") on node \"crc\" DevicePath \"\"" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.545144 4933 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d49bee31-b7e9-4daa-986f-b6f58c663813-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.545154 4933 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.545164 4933 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 
09:43:28.545173 4933 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-log-socket\") on node \"crc\" DevicePath \"\"" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.545182 4933 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.545193 4933 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.545202 4933 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d49bee31-b7e9-4daa-986f-b6f58c663813-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.545210 4933 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.545219 4933 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.545228 4933 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-node-log\") on node \"crc\" DevicePath \"\"" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.545239 4933 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.545247 4933 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d49bee31-b7e9-4daa-986f-b6f58c663813-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.545255 4933 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-slash\") on node \"crc\" DevicePath \"\"" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.545264 4933 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.545272 4933 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.545281 4933 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d49bee31-b7e9-4daa-986f-b6f58c663813-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.549776 4933 scope.go:117] "RemoveContainer" 
containerID="726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.550468 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zccpd"] Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.560638 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zccpd"] Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.567185 4933 scope.go:117] "RemoveContainer" containerID="d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.581253 4933 scope.go:117] "RemoveContainer" containerID="e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.598226 4933 scope.go:117] "RemoveContainer" containerID="c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.616247 4933 scope.go:117] "RemoveContainer" containerID="6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.633908 4933 scope.go:117] "RemoveContainer" containerID="c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.646719 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-systemd-units\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.646790 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-ovnkube-config\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.646831 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-ovnkube-script-lib\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.646857 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-run-openvswitch\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.646891 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-systemd-units\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.646924 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-host-run-netns\") pod \"ovnkube-node-r692q\" (UID: 
\"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.646963 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-run-openvswitch\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.647024 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-host-run-netns\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.647079 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-host-cni-bin\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.647097 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-node-log\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.647117 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-env-overrides\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.647169 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-host-cni-bin\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.647362 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-node-log\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.647545 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-ovnkube-config\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.647604 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-etc-openvswitch\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.647649 4933 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-env-overrides\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.647669 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-etc-openvswitch\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.647677 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-host-slash\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.647703 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-host-slash\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.647753 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-host-run-ovn-kubernetes\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.647780 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-host-run-ovn-kubernetes\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.647800 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-var-lib-openvswitch\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.647855 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwjb6\" (UniqueName: \"kubernetes.io/projected/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-kube-api-access-kwjb6\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.647895 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-log-socket\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.647955 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-host-cni-netd\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.647994 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.648037 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-host-kubelet\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.648069 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-run-systemd\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.648099 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-run-ovn\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.648160 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-ovn-node-metrics-cert\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.648173 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-ovnkube-script-lib\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.647856 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-var-lib-openvswitch\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.648268 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-run-systemd\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.648335 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-run-ovn\") 
pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.648372 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-host-cni-netd\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.648595 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-log-socket\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.648594 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.648628 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-host-kubelet\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.650059 4933 scope.go:117] "RemoveContainer" containerID="07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.652968 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-ovn-node-metrics-cert\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.666363 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwjb6\" (UniqueName: \"kubernetes.io/projected/e845beb8-c3c0-464e-95a6-4e7e9628ebd2-kube-api-access-kwjb6\") pod \"ovnkube-node-r692q\" (UID: \"e845beb8-c3c0-464e-95a6-4e7e9628ebd2\") " pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.668329 4933 scope.go:117] "RemoveContainer" containerID="f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.684941 4933 scope.go:117] "RemoveContainer" containerID="e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0" Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.685516 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0\": container with ID starting with e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0 not found: ID does not exist" containerID="e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.685559 4933 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0"} err="failed to get container status \"e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0\": rpc error: code = NotFound desc = could not find container \"e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0\": container with ID starting with e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.685596 4933 scope.go:117] "RemoveContainer" containerID="74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d" Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.686070 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d\": container with ID starting with 74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d not found: ID does not exist" containerID="74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.686135 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d"} err="failed to get container status \"74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d\": rpc error: code = NotFound desc = could not find container \"74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d\": container with ID starting with 74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.686166 4933 scope.go:117] "RemoveContainer" containerID="726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1" Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.686520 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\": container with ID starting with 726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1 not found: ID does not exist" containerID="726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.686541 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1"} err="failed to get container status \"726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\": rpc error: code = NotFound desc = could not find container \"726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\": container with ID starting with 726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.686569 4933 scope.go:117] "RemoveContainer" containerID="d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d" Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.686818 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\": container with ID starting with d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d not found: ID does not exist" 
containerID="d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.686862 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d"} err="failed to get container status \"d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\": rpc error: code = NotFound desc = could not find container \"d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\": container with ID starting with d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.686875 4933 scope.go:117] "RemoveContainer" containerID="e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3" Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.687117 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\": container with ID starting with e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3 not found: ID does not exist" containerID="e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.687161 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3"} err="failed to get container status \"e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\": rpc error: code = NotFound desc = could not find container \"e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\": container with ID starting with e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.687180 4933 scope.go:117] "RemoveContainer" containerID="c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976" Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.687720 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\": container with ID starting with c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976 not found: ID does not exist" containerID="c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.687769 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976"} err="failed to get container status \"c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\": rpc error: code = NotFound desc = could not find container \"c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\": container with ID starting with c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.687787 4933 scope.go:117] "RemoveContainer" containerID="6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940" Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.688160 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\": container with ID starting with 6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940 not found: ID does not exist" containerID="6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.688187 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940"} err="failed to get container status \"6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\": rpc error: code = NotFound desc = could not find container \"6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\": container with ID starting with 6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.688222 4933 scope.go:117] "RemoveContainer" containerID="c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38" Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.688715 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\": container with ID starting with c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38 not found: ID does not exist" containerID="c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.688765 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38"} err="failed to get container status \"c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\": rpc error: code = NotFound desc = could not find container \"c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\": container with ID starting with c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.688782 4933 scope.go:117] "RemoveContainer" containerID="07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1" Dec 01 09:43:28 crc kubenswrapper[4933]: E1201 09:43:28.689285 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\": container with ID starting with 07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1 not found: ID does not exist" containerID="07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.689373 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1"} err="failed to get container status \"07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\": rpc error: code = NotFound desc = could not find container \"07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\": container with ID starting with 07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.689395 4933 scope.go:117] "RemoveContainer" containerID="f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9" Dec 01 09:43:28 crc 
kubenswrapper[4933]: E1201 09:43:28.689723 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\": container with ID starting with f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9 not found: ID does not exist" containerID="f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.689753 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9"} err="failed to get container status \"f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\": rpc error: code = NotFound desc = could not find container \"f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\": container with ID starting with f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.689802 4933 scope.go:117] "RemoveContainer" containerID="e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.690163 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0"} err="failed to get container status \"e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0\": rpc error: code = NotFound desc = could not find container \"e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0\": container with ID starting with e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.690202 4933 scope.go:117] "RemoveContainer" containerID="74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.690667 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d"} err="failed to get container status \"74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d\": rpc error: code = NotFound desc = could not find container \"74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d\": container with ID starting with 74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.690717 4933 scope.go:117] "RemoveContainer" containerID="726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.691099 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1"} err="failed to get container status \"726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\": rpc error: code = NotFound desc = could not find container \"726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\": container with ID starting with 726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.691124 4933 scope.go:117] "RemoveContainer" containerID="d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d" Dec 01 09:43:28 crc 
kubenswrapper[4933]: I1201 09:43:28.691545 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d"} err="failed to get container status \"d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\": rpc error: code = NotFound desc = could not find container \"d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\": container with ID starting with d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.691575 4933 scope.go:117] "RemoveContainer" containerID="e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.691855 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3"} err="failed to get container status \"e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\": rpc error: code = NotFound desc = could not find container \"e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\": container with ID starting with e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.691890 4933 scope.go:117] "RemoveContainer" containerID="c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.692258 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976"} err="failed to get container status \"c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\": rpc error: code = NotFound desc = could not find container \"c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\": container with ID starting with c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.692292 4933 scope.go:117] "RemoveContainer" containerID="6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.692755 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940"} err="failed to get container status \"6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\": rpc error: code = NotFound desc = could not find container \"6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\": container with ID starting with 6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.692834 4933 scope.go:117] "RemoveContainer" containerID="c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.693387 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38"} err="failed to get container status \"c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\": rpc error: code = NotFound desc = could not find container \"c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\": container with ID 
starting with c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.693411 4933 scope.go:117] "RemoveContainer" containerID="07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.693726 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1"} err="failed to get container status \"07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\": rpc error: code = NotFound desc = could not find container \"07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\": container with ID starting with 07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.693754 4933 scope.go:117] "RemoveContainer" containerID="f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.694124 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9"} err="failed to get container status \"f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\": rpc error: code = NotFound desc = could not find container \"f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\": container with ID starting with f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.694168 4933 scope.go:117] "RemoveContainer" containerID="e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.694602 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0"} err="failed to get container status \"e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0\": rpc error: code = NotFound desc = could not find container \"e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0\": container with ID starting with e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.694629 4933 scope.go:117] "RemoveContainer" containerID="74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.694930 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d"} err="failed to get container status \"74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d\": rpc error: code = NotFound desc = could not find container \"74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d\": container with ID starting with 74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.694951 4933 scope.go:117] "RemoveContainer" containerID="726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.695201 4933 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1"} err="failed to get container status \"726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\": rpc error: code = NotFound desc = could not find container \"726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\": container with ID starting with 726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.695233 4933 scope.go:117] "RemoveContainer" containerID="d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.695646 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d"} err="failed to get container status \"d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\": rpc error: code = NotFound desc = could not find container \"d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\": container with ID starting with d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.695670 4933 scope.go:117] "RemoveContainer" containerID="e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.695947 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3"} err="failed to get container status \"e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\": rpc error: code = NotFound desc = could not find container \"e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\": container with ID starting with e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.695968 4933 scope.go:117] "RemoveContainer" containerID="c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.696208 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976"} err="failed to get container status \"c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\": rpc error: code = NotFound desc = could not find container \"c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\": container with ID starting with c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.696232 4933 scope.go:117] "RemoveContainer" containerID="6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.696642 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940"} err="failed to get container status \"6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\": rpc error: code = NotFound desc = could not find container \"6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\": container with ID starting with 6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940 not found: ID does not exist" Dec 
01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.696670 4933 scope.go:117] "RemoveContainer" containerID="c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.697011 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38"} err="failed to get container status \"c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\": rpc error: code = NotFound desc = could not find container \"c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\": container with ID starting with c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.697077 4933 scope.go:117] "RemoveContainer" containerID="07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.697434 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1"} err="failed to get container status \"07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\": rpc error: code = NotFound desc = could not find container \"07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\": container with ID starting with 07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.697461 4933 scope.go:117] "RemoveContainer" containerID="f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.697736 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9"} err="failed to get container status \"f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\": rpc error: code = NotFound desc = could not find container \"f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\": container with ID starting with f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.697757 4933 scope.go:117] "RemoveContainer" containerID="e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.698082 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0"} err="failed to get container status \"e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0\": rpc error: code = NotFound desc = could not find container \"e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0\": container with ID starting with e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.698118 4933 scope.go:117] "RemoveContainer" containerID="74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.698372 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d"} err="failed to get container status 
\"74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d\": rpc error: code = NotFound desc = could not find container \"74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d\": container with ID starting with 74f07a55448967411696b183c294e6f59af59d73c5b214b06830dfc34658fc0d not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.698399 4933 scope.go:117] "RemoveContainer" containerID="726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.698785 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1"} err="failed to get container status \"726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\": rpc error: code = NotFound desc = could not find container \"726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1\": container with ID starting with 726055ba16fcd1b948ba157c0d30e76d2a23d686f8ea3f66977c072db2fea5d1 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.698812 4933 scope.go:117] "RemoveContainer" containerID="d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.699115 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d"} err="failed to get container status \"d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\": rpc error: code = NotFound desc = could not find container \"d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d\": container with ID starting with d3bd33bdefd392f68dd4dbfd2682d892f9e5d753d4b55bd375c9575e2a05297d not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.699148 4933 scope.go:117] "RemoveContainer" containerID="e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.699456 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3"} err="failed to get container status \"e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\": rpc error: code = NotFound desc = could not find container \"e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3\": container with ID starting with e04ae3e897e01778c87aa4a487b57f86fad19ed9e7704c987ae2795fcb5451b3 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.699482 4933 scope.go:117] "RemoveContainer" containerID="c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.699847 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976"} err="failed to get container status \"c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\": rpc error: code = NotFound desc = could not find container \"c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976\": container with ID starting with c97d24f56519fdfafae896d8c5ccf5bd283cb07926bc86d656b6e4269d136976 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.699872 4933 scope.go:117] "RemoveContainer" 
containerID="6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.700123 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940"} err="failed to get container status \"6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\": rpc error: code = NotFound desc = could not find container \"6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940\": container with ID starting with 6a170e38168c5dc7726c2418f7e9e1b16c0f6184d72abca862acf32eb9171940 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.700142 4933 scope.go:117] "RemoveContainer" containerID="c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.700372 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38"} err="failed to get container status \"c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\": rpc error: code = NotFound desc = could not find container \"c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38\": container with ID starting with c8de0d5f4fd41586133a6d32b09047b48a8fa2095f23f1475d77953bf7854e38 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.700397 4933 scope.go:117] "RemoveContainer" containerID="07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.700664 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1"} err="failed to get container status \"07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\": rpc error: code = NotFound desc = could not find container \"07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1\": container with ID starting with 07d13ab40dde56ecef5e1e1dcfd0452cf96171abf786248c838498f74f8eeeb1 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.700691 4933 scope.go:117] "RemoveContainer" containerID="f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.700923 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9"} err="failed to get container status \"f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\": rpc error: code = NotFound desc = could not find container \"f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9\": container with ID starting with f7d37e7a13383d636baf2c86d5f5f236b17633490c284e0289bd9574ca4ccfa9 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.700950 4933 scope.go:117] "RemoveContainer" containerID="e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.701198 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0"} err="failed to get container status \"e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0\": rpc error: code = NotFound desc = could not find 
container \"e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0\": container with ID starting with e173a176f9d368dcc060ffc3aba412c2c170505b57de11960fc6fcdb86f265d0 not found: ID does not exist" Dec 01 09:43:28 crc kubenswrapper[4933]: I1201 09:43:28.748476 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:28 crc kubenswrapper[4933]: W1201 09:43:28.770935 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode845beb8_c3c0_464e_95a6_4e7e9628ebd2.slice/crio-43b3cac42bdc27a086ae75625cac81ae4bc1d3b86cda8a63b56ba6e470ecdc38 WatchSource:0}: Error finding container 43b3cac42bdc27a086ae75625cac81ae4bc1d3b86cda8a63b56ba6e470ecdc38: Status 404 returned error can't find the container with id 43b3cac42bdc27a086ae75625cac81ae4bc1d3b86cda8a63b56ba6e470ecdc38 Dec 01 09:43:29 crc kubenswrapper[4933]: I1201 09:43:29.508699 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r692q" event={"ID":"e845beb8-c3c0-464e-95a6-4e7e9628ebd2","Type":"ContainerStarted","Data":"43b3cac42bdc27a086ae75625cac81ae4bc1d3b86cda8a63b56ba6e470ecdc38"} Dec 01 09:43:29 crc kubenswrapper[4933]: I1201 09:43:29.676017 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d49bee31-b7e9-4daa-986f-b6f58c663813" path="/var/lib/kubelet/pods/d49bee31-b7e9-4daa-986f-b6f58c663813/volumes" Dec 01 09:43:30 crc kubenswrapper[4933]: I1201 09:43:30.518503 4933 generic.go:334] "Generic (PLEG): container finished" podID="e845beb8-c3c0-464e-95a6-4e7e9628ebd2" containerID="812e5a0bdbb1c44c358612f77152ab4a204c8e94778942537c67b3b6b3704994" exitCode=0 Dec 01 09:43:30 crc kubenswrapper[4933]: I1201 09:43:30.518560 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r692q" event={"ID":"e845beb8-c3c0-464e-95a6-4e7e9628ebd2","Type":"ContainerDied","Data":"812e5a0bdbb1c44c358612f77152ab4a204c8e94778942537c67b3b6b3704994"} Dec 01 09:43:31 crc kubenswrapper[4933]: I1201 09:43:31.528262 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r692q" event={"ID":"e845beb8-c3c0-464e-95a6-4e7e9628ebd2","Type":"ContainerStarted","Data":"c428433f1a9e10ce3517c38416687ee43428cf2fe65f3fa6b4e80359c64110b6"} Dec 01 09:43:31 crc kubenswrapper[4933]: I1201 09:43:31.528589 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r692q" event={"ID":"e845beb8-c3c0-464e-95a6-4e7e9628ebd2","Type":"ContainerStarted","Data":"80cd1372a8e96612c6f38edcc9b807d09243862f8dcf6f0ef71527aa2170308b"} Dec 01 09:43:31 crc kubenswrapper[4933]: I1201 09:43:31.528602 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r692q" event={"ID":"e845beb8-c3c0-464e-95a6-4e7e9628ebd2","Type":"ContainerStarted","Data":"c85254ec1f4b861e03a0a990d3c1a63268c83a26f0986f4210d6947fcad996a9"} Dec 01 09:43:31 crc kubenswrapper[4933]: I1201 09:43:31.528611 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r692q" event={"ID":"e845beb8-c3c0-464e-95a6-4e7e9628ebd2","Type":"ContainerStarted","Data":"75b1e061da6e83f8d1fcee0ef61ac1445585a9168c89d8f6ad7cd94ffb999c7d"} Dec 01 09:43:31 crc kubenswrapper[4933]: I1201 09:43:31.528620 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r692q" 
event={"ID":"e845beb8-c3c0-464e-95a6-4e7e9628ebd2","Type":"ContainerStarted","Data":"5bd22908cdab1b633476f9af82ab78bc41dedee591c0b6af2b5cb4417058d4cf"} Dec 01 09:43:31 crc kubenswrapper[4933]: I1201 09:43:31.528629 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r692q" event={"ID":"e845beb8-c3c0-464e-95a6-4e7e9628ebd2","Type":"ContainerStarted","Data":"5a9f0506a86d571280797da1cc044e412ac0bb2efc0e261f3911e644336bd97e"} Dec 01 09:43:33 crc kubenswrapper[4933]: I1201 09:43:33.543075 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r692q" event={"ID":"e845beb8-c3c0-464e-95a6-4e7e9628ebd2","Type":"ContainerStarted","Data":"9294c4757385e9f36a060674d9f475343bc8e4c68c55899c0c4880baac54f3b7"} Dec 01 09:43:36 crc kubenswrapper[4933]: I1201 09:43:36.564056 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r692q" event={"ID":"e845beb8-c3c0-464e-95a6-4e7e9628ebd2","Type":"ContainerStarted","Data":"9b9e97d5ddadbcd4c30ff80957a8b97ed27365a22b54a6ee7a5e127417b74079"} Dec 01 09:43:36 crc kubenswrapper[4933]: I1201 09:43:36.564584 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:36 crc kubenswrapper[4933]: I1201 09:43:36.564643 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:36 crc kubenswrapper[4933]: I1201 09:43:36.564657 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:36 crc kubenswrapper[4933]: I1201 09:43:36.592293 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:36 crc kubenswrapper[4933]: I1201 09:43:36.593489 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:36 crc kubenswrapper[4933]: I1201 09:43:36.601206 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-r692q" podStartSLOduration=8.601163056 podStartE2EDuration="8.601163056s" podCreationTimestamp="2025-12-01 09:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:43:36.597996628 +0000 UTC m=+707.239720273" watchObservedRunningTime="2025-12-01 09:43:36.601163056 +0000 UTC m=+707.242886681" Dec 01 09:43:43 crc kubenswrapper[4933]: I1201 09:43:43.668472 4933 scope.go:117] "RemoveContainer" containerID="fa67bf1207226c8b6f7a005f4e479007b6cf584107103b695e65b9c6c160fbed" Dec 01 09:43:43 crc kubenswrapper[4933]: E1201 09:43:43.669705 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-4fncv_openshift-multus(f0c7b4b8-8e07-4bd4-b811-cdb373873e8a)\"" pod="openshift-multus/multus-4fncv" podUID="f0c7b4b8-8e07-4bd4-b811-cdb373873e8a" Dec 01 09:43:49 crc kubenswrapper[4933]: I1201 09:43:49.976519 4933 scope.go:117] "RemoveContainer" containerID="1ac251024105496fb2cd821720a3ad6e717ef9c6da03401d62a0d58a96dce58f" Dec 01 09:43:50 crc kubenswrapper[4933]: I1201 09:43:50.656060 4933 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-4fncv_f0c7b4b8-8e07-4bd4-b811-cdb373873e8a/kube-multus/2.log" Dec 01 09:43:55 crc kubenswrapper[4933]: I1201 09:43:55.667818 4933 scope.go:117] "RemoveContainer" containerID="fa67bf1207226c8b6f7a005f4e479007b6cf584107103b695e65b9c6c160fbed" Dec 01 09:43:56 crc kubenswrapper[4933]: I1201 09:43:56.705847 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4fncv_f0c7b4b8-8e07-4bd4-b811-cdb373873e8a/kube-multus/2.log" Dec 01 09:43:56 crc kubenswrapper[4933]: I1201 09:43:56.706239 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4fncv" event={"ID":"f0c7b4b8-8e07-4bd4-b811-cdb373873e8a","Type":"ContainerStarted","Data":"0ef89e2561d16052f92d29509f00161f52fa550bb2f825915b2afeceb4498f66"} Dec 01 09:43:58 crc kubenswrapper[4933]: I1201 09:43:58.776607 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-r692q" Dec 01 09:43:59 crc kubenswrapper[4933]: I1201 09:43:59.250266 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm"] Dec 01 09:43:59 crc kubenswrapper[4933]: I1201 09:43:59.251636 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm" Dec 01 09:43:59 crc kubenswrapper[4933]: I1201 09:43:59.254137 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 01 09:43:59 crc kubenswrapper[4933]: I1201 09:43:59.261460 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm"] Dec 01 09:43:59 crc kubenswrapper[4933]: I1201 09:43:59.361770 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvr9x\" (UniqueName: \"kubernetes.io/projected/61cb8d89-75c1-4be3-9c9b-ff1337d73c4e-kube-api-access-hvr9x\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm\" (UID: \"61cb8d89-75c1-4be3-9c9b-ff1337d73c4e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm" Dec 01 09:43:59 crc kubenswrapper[4933]: I1201 09:43:59.361884 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61cb8d89-75c1-4be3-9c9b-ff1337d73c4e-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm\" (UID: \"61cb8d89-75c1-4be3-9c9b-ff1337d73c4e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm" Dec 01 09:43:59 crc kubenswrapper[4933]: I1201 09:43:59.361973 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61cb8d89-75c1-4be3-9c9b-ff1337d73c4e-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm\" (UID: \"61cb8d89-75c1-4be3-9c9b-ff1337d73c4e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm" Dec 01 09:43:59 crc kubenswrapper[4933]: I1201 09:43:59.463562 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61cb8d89-75c1-4be3-9c9b-ff1337d73c4e-util\") pod 
\"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm\" (UID: \"61cb8d89-75c1-4be3-9c9b-ff1337d73c4e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm" Dec 01 09:43:59 crc kubenswrapper[4933]: I1201 09:43:59.463644 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvr9x\" (UniqueName: \"kubernetes.io/projected/61cb8d89-75c1-4be3-9c9b-ff1337d73c4e-kube-api-access-hvr9x\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm\" (UID: \"61cb8d89-75c1-4be3-9c9b-ff1337d73c4e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm" Dec 01 09:43:59 crc kubenswrapper[4933]: I1201 09:43:59.463677 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61cb8d89-75c1-4be3-9c9b-ff1337d73c4e-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm\" (UID: \"61cb8d89-75c1-4be3-9c9b-ff1337d73c4e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm" Dec 01 09:43:59 crc kubenswrapper[4933]: I1201 09:43:59.464141 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61cb8d89-75c1-4be3-9c9b-ff1337d73c4e-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm\" (UID: \"61cb8d89-75c1-4be3-9c9b-ff1337d73c4e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm" Dec 01 09:43:59 crc kubenswrapper[4933]: I1201 09:43:59.464171 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61cb8d89-75c1-4be3-9c9b-ff1337d73c4e-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm\" (UID: \"61cb8d89-75c1-4be3-9c9b-ff1337d73c4e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm" Dec 01 09:43:59 crc kubenswrapper[4933]: I1201 09:43:59.487589 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvr9x\" (UniqueName: \"kubernetes.io/projected/61cb8d89-75c1-4be3-9c9b-ff1337d73c4e-kube-api-access-hvr9x\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm\" (UID: \"61cb8d89-75c1-4be3-9c9b-ff1337d73c4e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm" Dec 01 09:43:59 crc kubenswrapper[4933]: I1201 09:43:59.573118 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm" Dec 01 09:43:59 crc kubenswrapper[4933]: I1201 09:43:59.767650 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm"] Dec 01 09:43:59 crc kubenswrapper[4933]: W1201 09:43:59.774639 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61cb8d89_75c1_4be3_9c9b_ff1337d73c4e.slice/crio-8be3a3ea6f4dadcb24e39374e787531c8a1e4e3e5666857f1fa479077ce217cd WatchSource:0}: Error finding container 8be3a3ea6f4dadcb24e39374e787531c8a1e4e3e5666857f1fa479077ce217cd: Status 404 returned error can't find the container with id 8be3a3ea6f4dadcb24e39374e787531c8a1e4e3e5666857f1fa479077ce217cd Dec 01 09:44:00 crc kubenswrapper[4933]: I1201 09:44:00.730322 4933 generic.go:334] "Generic (PLEG): container finished" podID="61cb8d89-75c1-4be3-9c9b-ff1337d73c4e" containerID="ef22bb2c64d432a3ff193e1d4aed223dfc30c759365bc95066b1b0c89882fd82" exitCode=0 Dec 01 09:44:00 crc kubenswrapper[4933]: I1201 09:44:00.730393 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm" event={"ID":"61cb8d89-75c1-4be3-9c9b-ff1337d73c4e","Type":"ContainerDied","Data":"ef22bb2c64d432a3ff193e1d4aed223dfc30c759365bc95066b1b0c89882fd82"} Dec 01 09:44:00 crc kubenswrapper[4933]: I1201 09:44:00.730438 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm" event={"ID":"61cb8d89-75c1-4be3-9c9b-ff1337d73c4e","Type":"ContainerStarted","Data":"8be3a3ea6f4dadcb24e39374e787531c8a1e4e3e5666857f1fa479077ce217cd"} Dec 01 09:44:04 crc kubenswrapper[4933]: I1201 09:44:04.757350 4933 generic.go:334] "Generic (PLEG): container finished" podID="61cb8d89-75c1-4be3-9c9b-ff1337d73c4e" containerID="ea576d77ab63bcdfe2e246943eadc4e03b99c48c69ff74260841172e56f474b6" exitCode=0 Dec 01 09:44:04 crc kubenswrapper[4933]: I1201 09:44:04.757443 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm" event={"ID":"61cb8d89-75c1-4be3-9c9b-ff1337d73c4e","Type":"ContainerDied","Data":"ea576d77ab63bcdfe2e246943eadc4e03b99c48c69ff74260841172e56f474b6"} Dec 01 09:44:05 crc kubenswrapper[4933]: I1201 09:44:05.767762 4933 generic.go:334] "Generic (PLEG): container finished" podID="61cb8d89-75c1-4be3-9c9b-ff1337d73c4e" containerID="1216d7ef340f7eae0b4b885bd89c7713694b3ab46bc3942724c375170873a983" exitCode=0 Dec 01 09:44:05 crc kubenswrapper[4933]: I1201 09:44:05.767817 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm" event={"ID":"61cb8d89-75c1-4be3-9c9b-ff1337d73c4e","Type":"ContainerDied","Data":"1216d7ef340f7eae0b4b885bd89c7713694b3ab46bc3942724c375170873a983"} Dec 01 09:44:06 crc kubenswrapper[4933]: I1201 09:44:06.989700 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm" Dec 01 09:44:07 crc kubenswrapper[4933]: I1201 09:44:07.082712 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61cb8d89-75c1-4be3-9c9b-ff1337d73c4e-util\") pod \"61cb8d89-75c1-4be3-9c9b-ff1337d73c4e\" (UID: \"61cb8d89-75c1-4be3-9c9b-ff1337d73c4e\") " Dec 01 09:44:07 crc kubenswrapper[4933]: I1201 09:44:07.082800 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvr9x\" (UniqueName: \"kubernetes.io/projected/61cb8d89-75c1-4be3-9c9b-ff1337d73c4e-kube-api-access-hvr9x\") pod \"61cb8d89-75c1-4be3-9c9b-ff1337d73c4e\" (UID: \"61cb8d89-75c1-4be3-9c9b-ff1337d73c4e\") " Dec 01 09:44:07 crc kubenswrapper[4933]: I1201 09:44:07.082827 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61cb8d89-75c1-4be3-9c9b-ff1337d73c4e-bundle\") pod \"61cb8d89-75c1-4be3-9c9b-ff1337d73c4e\" (UID: \"61cb8d89-75c1-4be3-9c9b-ff1337d73c4e\") " Dec 01 09:44:07 crc kubenswrapper[4933]: I1201 09:44:07.084154 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61cb8d89-75c1-4be3-9c9b-ff1337d73c4e-bundle" (OuterVolumeSpecName: "bundle") pod "61cb8d89-75c1-4be3-9c9b-ff1337d73c4e" (UID: "61cb8d89-75c1-4be3-9c9b-ff1337d73c4e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:44:07 crc kubenswrapper[4933]: I1201 09:44:07.093221 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61cb8d89-75c1-4be3-9c9b-ff1337d73c4e-kube-api-access-hvr9x" (OuterVolumeSpecName: "kube-api-access-hvr9x") pod "61cb8d89-75c1-4be3-9c9b-ff1337d73c4e" (UID: "61cb8d89-75c1-4be3-9c9b-ff1337d73c4e"). InnerVolumeSpecName "kube-api-access-hvr9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:44:07 crc kubenswrapper[4933]: I1201 09:44:07.093975 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61cb8d89-75c1-4be3-9c9b-ff1337d73c4e-util" (OuterVolumeSpecName: "util") pod "61cb8d89-75c1-4be3-9c9b-ff1337d73c4e" (UID: "61cb8d89-75c1-4be3-9c9b-ff1337d73c4e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:44:07 crc kubenswrapper[4933]: I1201 09:44:07.184129 4933 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61cb8d89-75c1-4be3-9c9b-ff1337d73c4e-util\") on node \"crc\" DevicePath \"\"" Dec 01 09:44:07 crc kubenswrapper[4933]: I1201 09:44:07.184178 4933 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61cb8d89-75c1-4be3-9c9b-ff1337d73c4e-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:44:07 crc kubenswrapper[4933]: I1201 09:44:07.184191 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvr9x\" (UniqueName: \"kubernetes.io/projected/61cb8d89-75c1-4be3-9c9b-ff1337d73c4e-kube-api-access-hvr9x\") on node \"crc\" DevicePath \"\"" Dec 01 09:44:07 crc kubenswrapper[4933]: I1201 09:44:07.783264 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm" event={"ID":"61cb8d89-75c1-4be3-9c9b-ff1337d73c4e","Type":"ContainerDied","Data":"8be3a3ea6f4dadcb24e39374e787531c8a1e4e3e5666857f1fa479077ce217cd"} Dec 01 09:44:07 crc kubenswrapper[4933]: I1201 09:44:07.783355 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8be3a3ea6f4dadcb24e39374e787531c8a1e4e3e5666857f1fa479077ce217cd" Dec 01 09:44:07 crc kubenswrapper[4933]: I1201 09:44:07.783430 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm" Dec 01 09:44:10 crc kubenswrapper[4933]: I1201 09:44:10.812870 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-l6tck"] Dec 01 09:44:10 crc kubenswrapper[4933]: E1201 09:44:10.814608 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61cb8d89-75c1-4be3-9c9b-ff1337d73c4e" containerName="extract" Dec 01 09:44:10 crc kubenswrapper[4933]: I1201 09:44:10.819933 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cb8d89-75c1-4be3-9c9b-ff1337d73c4e" containerName="extract" Dec 01 09:44:10 crc kubenswrapper[4933]: E1201 09:44:10.820001 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61cb8d89-75c1-4be3-9c9b-ff1337d73c4e" containerName="pull" Dec 01 09:44:10 crc kubenswrapper[4933]: I1201 09:44:10.820011 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cb8d89-75c1-4be3-9c9b-ff1337d73c4e" containerName="pull" Dec 01 09:44:10 crc kubenswrapper[4933]: E1201 09:44:10.820043 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61cb8d89-75c1-4be3-9c9b-ff1337d73c4e" containerName="util" Dec 01 09:44:10 crc kubenswrapper[4933]: I1201 09:44:10.820056 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cb8d89-75c1-4be3-9c9b-ff1337d73c4e" containerName="util" Dec 01 09:44:10 crc kubenswrapper[4933]: I1201 09:44:10.820412 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="61cb8d89-75c1-4be3-9c9b-ff1337d73c4e" containerName="extract" Dec 01 09:44:10 crc kubenswrapper[4933]: I1201 09:44:10.820973 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-l6tck" Dec 01 09:44:10 crc kubenswrapper[4933]: I1201 09:44:10.823777 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-pnjxb" Dec 01 09:44:10 crc kubenswrapper[4933]: I1201 09:44:10.824426 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 01 09:44:10 crc kubenswrapper[4933]: I1201 09:44:10.824856 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 01 09:44:10 crc kubenswrapper[4933]: I1201 09:44:10.837842 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-l6tck"] Dec 01 09:44:10 crc kubenswrapper[4933]: I1201 09:44:10.838082 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnk52\" (UniqueName: \"kubernetes.io/projected/c3ce2443-f386-4203-8c92-f0961122fb6b-kube-api-access-rnk52\") pod \"nmstate-operator-5b5b58f5c8-l6tck\" (UID: \"c3ce2443-f386-4203-8c92-f0961122fb6b\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-l6tck" Dec 01 09:44:10 crc kubenswrapper[4933]: I1201 09:44:10.939857 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnk52\" (UniqueName: \"kubernetes.io/projected/c3ce2443-f386-4203-8c92-f0961122fb6b-kube-api-access-rnk52\") pod \"nmstate-operator-5b5b58f5c8-l6tck\" (UID: \"c3ce2443-f386-4203-8c92-f0961122fb6b\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-l6tck" Dec 01 09:44:10 crc kubenswrapper[4933]: I1201 09:44:10.963724 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnk52\" (UniqueName: \"kubernetes.io/projected/c3ce2443-f386-4203-8c92-f0961122fb6b-kube-api-access-rnk52\") pod \"nmstate-operator-5b5b58f5c8-l6tck\" (UID: \"c3ce2443-f386-4203-8c92-f0961122fb6b\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-l6tck" Dec 01 09:44:11 crc kubenswrapper[4933]: I1201 09:44:11.139395 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-l6tck" Dec 01 09:44:11 crc kubenswrapper[4933]: I1201 09:44:11.376981 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-l6tck"] Dec 01 09:44:11 crc kubenswrapper[4933]: W1201 09:44:11.382747 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3ce2443_f386_4203_8c92_f0961122fb6b.slice/crio-5deb9f45cefffb9bfdd11f642c38f58779686e2ed32920ad0ae4bba7b04984bd WatchSource:0}: Error finding container 5deb9f45cefffb9bfdd11f642c38f58779686e2ed32920ad0ae4bba7b04984bd: Status 404 returned error can't find the container with id 5deb9f45cefffb9bfdd11f642c38f58779686e2ed32920ad0ae4bba7b04984bd Dec 01 09:44:11 crc kubenswrapper[4933]: I1201 09:44:11.806346 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-l6tck" event={"ID":"c3ce2443-f386-4203-8c92-f0961122fb6b","Type":"ContainerStarted","Data":"5deb9f45cefffb9bfdd11f642c38f58779686e2ed32920ad0ae4bba7b04984bd"} Dec 01 09:44:14 crc kubenswrapper[4933]: I1201 09:44:14.827236 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-l6tck" event={"ID":"c3ce2443-f386-4203-8c92-f0961122fb6b","Type":"ContainerStarted","Data":"e0f5ea70cad0831ac8cf1004e7589902d4f99201d506d6149485449710046ef3"} Dec 01 09:44:14 crc kubenswrapper[4933]: I1201 09:44:14.848203 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-l6tck" podStartSLOduration=2.267992555 podStartE2EDuration="4.848179487s" podCreationTimestamp="2025-12-01 09:44:10 +0000 UTC" firstStartedPulling="2025-12-01 09:44:11.384733065 +0000 UTC m=+742.026456680" lastFinishedPulling="2025-12-01 09:44:13.964919997 +0000 UTC m=+744.606643612" observedRunningTime="2025-12-01 09:44:14.844614669 +0000 UTC m=+745.486338284" watchObservedRunningTime="2025-12-01 09:44:14.848179487 +0000 UTC m=+745.489903102" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.511547 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-kqnxq"] Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.513351 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-kqnxq" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.515688 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-xzbr7" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.527176 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8bmq8"] Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.528048 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8bmq8" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.530562 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.550904 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8bmq8"] Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.557738 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-sn9zt"] Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.558756 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-sn9zt" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.592935 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-kqnxq"] Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.661944 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b95afca6-9346-403c-b1ab-d04d36537c40-ovs-socket\") pod \"nmstate-handler-sn9zt\" (UID: \"b95afca6-9346-403c-b1ab-d04d36537c40\") " pod="openshift-nmstate/nmstate-handler-sn9zt" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.662044 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvgl2\" (UniqueName: \"kubernetes.io/projected/18b298e1-b8df-4272-a30c-496424de8d76-kube-api-access-gvgl2\") pod \"nmstate-metrics-7f946cbc9-kqnxq\" (UID: \"18b298e1-b8df-4272-a30c-496424de8d76\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-kqnxq" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.662069 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b95afca6-9346-403c-b1ab-d04d36537c40-nmstate-lock\") pod \"nmstate-handler-sn9zt\" (UID: \"b95afca6-9346-403c-b1ab-d04d36537c40\") " pod="openshift-nmstate/nmstate-handler-sn9zt" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.662084 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ccec4c34-0e07-4b0a-a36a-a6bc46982fa6-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-8bmq8\" (UID: \"ccec4c34-0e07-4b0a-a36a-a6bc46982fa6\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8bmq8" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.662171 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmlfj\" (UniqueName: \"kubernetes.io/projected/b95afca6-9346-403c-b1ab-d04d36537c40-kube-api-access-dmlfj\") pod \"nmstate-handler-sn9zt\" (UID: \"b95afca6-9346-403c-b1ab-d04d36537c40\") " pod="openshift-nmstate/nmstate-handler-sn9zt" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.662237 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk6n4\" (UniqueName: \"kubernetes.io/projected/ccec4c34-0e07-4b0a-a36a-a6bc46982fa6-kube-api-access-xk6n4\") pod \"nmstate-webhook-5f6d4c5ccb-8bmq8\" (UID: \"ccec4c34-0e07-4b0a-a36a-a6bc46982fa6\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8bmq8" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.662265 4933 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b95afca6-9346-403c-b1ab-d04d36537c40-dbus-socket\") pod \"nmstate-handler-sn9zt\" (UID: \"b95afca6-9346-403c-b1ab-d04d36537c40\") " pod="openshift-nmstate/nmstate-handler-sn9zt" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.676297 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vh9f4"] Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.677413 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vh9f4" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.679591 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.679764 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-7d87x" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.679963 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.694561 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vh9f4"] Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.763224 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvgl2\" (UniqueName: \"kubernetes.io/projected/18b298e1-b8df-4272-a30c-496424de8d76-kube-api-access-gvgl2\") pod \"nmstate-metrics-7f946cbc9-kqnxq\" (UID: \"18b298e1-b8df-4272-a30c-496424de8d76\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-kqnxq" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.763270 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ccec4c34-0e07-4b0a-a36a-a6bc46982fa6-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-8bmq8\" (UID: \"ccec4c34-0e07-4b0a-a36a-a6bc46982fa6\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8bmq8" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.763291 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b95afca6-9346-403c-b1ab-d04d36537c40-nmstate-lock\") pod \"nmstate-handler-sn9zt\" (UID: \"b95afca6-9346-403c-b1ab-d04d36537c40\") " pod="openshift-nmstate/nmstate-handler-sn9zt" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.763337 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmlfj\" (UniqueName: \"kubernetes.io/projected/b95afca6-9346-403c-b1ab-d04d36537c40-kube-api-access-dmlfj\") pod \"nmstate-handler-sn9zt\" (UID: \"b95afca6-9346-403c-b1ab-d04d36537c40\") " pod="openshift-nmstate/nmstate-handler-sn9zt" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.763387 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk6n4\" (UniqueName: \"kubernetes.io/projected/ccec4c34-0e07-4b0a-a36a-a6bc46982fa6-kube-api-access-xk6n4\") pod \"nmstate-webhook-5f6d4c5ccb-8bmq8\" (UID: \"ccec4c34-0e07-4b0a-a36a-a6bc46982fa6\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8bmq8" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.763419 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" 
(UniqueName: \"kubernetes.io/host-path/b95afca6-9346-403c-b1ab-d04d36537c40-dbus-socket\") pod \"nmstate-handler-sn9zt\" (UID: \"b95afca6-9346-403c-b1ab-d04d36537c40\") " pod="openshift-nmstate/nmstate-handler-sn9zt" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.763444 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b95afca6-9346-403c-b1ab-d04d36537c40-ovs-socket\") pod \"nmstate-handler-sn9zt\" (UID: \"b95afca6-9346-403c-b1ab-d04d36537c40\") " pod="openshift-nmstate/nmstate-handler-sn9zt" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.763448 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b95afca6-9346-403c-b1ab-d04d36537c40-nmstate-lock\") pod \"nmstate-handler-sn9zt\" (UID: \"b95afca6-9346-403c-b1ab-d04d36537c40\") " pod="openshift-nmstate/nmstate-handler-sn9zt" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.763587 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b95afca6-9346-403c-b1ab-d04d36537c40-ovs-socket\") pod \"nmstate-handler-sn9zt\" (UID: \"b95afca6-9346-403c-b1ab-d04d36537c40\") " pod="openshift-nmstate/nmstate-handler-sn9zt" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.763818 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b95afca6-9346-403c-b1ab-d04d36537c40-dbus-socket\") pod \"nmstate-handler-sn9zt\" (UID: \"b95afca6-9346-403c-b1ab-d04d36537c40\") " pod="openshift-nmstate/nmstate-handler-sn9zt" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.771783 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ccec4c34-0e07-4b0a-a36a-a6bc46982fa6-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-8bmq8\" (UID: \"ccec4c34-0e07-4b0a-a36a-a6bc46982fa6\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8bmq8" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.781577 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmlfj\" (UniqueName: \"kubernetes.io/projected/b95afca6-9346-403c-b1ab-d04d36537c40-kube-api-access-dmlfj\") pod \"nmstate-handler-sn9zt\" (UID: \"b95afca6-9346-403c-b1ab-d04d36537c40\") " pod="openshift-nmstate/nmstate-handler-sn9zt" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.781667 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk6n4\" (UniqueName: \"kubernetes.io/projected/ccec4c34-0e07-4b0a-a36a-a6bc46982fa6-kube-api-access-xk6n4\") pod \"nmstate-webhook-5f6d4c5ccb-8bmq8\" (UID: \"ccec4c34-0e07-4b0a-a36a-a6bc46982fa6\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8bmq8" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.784106 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvgl2\" (UniqueName: \"kubernetes.io/projected/18b298e1-b8df-4272-a30c-496424de8d76-kube-api-access-gvgl2\") pod \"nmstate-metrics-7f946cbc9-kqnxq\" (UID: \"18b298e1-b8df-4272-a30c-496424de8d76\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-kqnxq" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.834985 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-kqnxq" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.846178 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8bmq8" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.865181 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/575c5872-9ab6-4a15-86b7-9dfbe33c0171-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-vh9f4\" (UID: \"575c5872-9ab6-4a15-86b7-9dfbe33c0171\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vh9f4" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.865339 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/575c5872-9ab6-4a15-86b7-9dfbe33c0171-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-vh9f4\" (UID: \"575c5872-9ab6-4a15-86b7-9dfbe33c0171\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vh9f4" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.865410 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgz6x\" (UniqueName: \"kubernetes.io/projected/575c5872-9ab6-4a15-86b7-9dfbe33c0171-kube-api-access-qgz6x\") pod \"nmstate-console-plugin-7fbb5f6569-vh9f4\" (UID: \"575c5872-9ab6-4a15-86b7-9dfbe33c0171\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vh9f4" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.876877 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-sn9zt" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.965846 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5755ff94dd-48brc"] Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.967012 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.967123 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/575c5872-9ab6-4a15-86b7-9dfbe33c0171-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-vh9f4\" (UID: \"575c5872-9ab6-4a15-86b7-9dfbe33c0171\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vh9f4" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.967193 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgz6x\" (UniqueName: \"kubernetes.io/projected/575c5872-9ab6-4a15-86b7-9dfbe33c0171-kube-api-access-qgz6x\") pod \"nmstate-console-plugin-7fbb5f6569-vh9f4\" (UID: \"575c5872-9ab6-4a15-86b7-9dfbe33c0171\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vh9f4" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.967224 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/575c5872-9ab6-4a15-86b7-9dfbe33c0171-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-vh9f4\" (UID: \"575c5872-9ab6-4a15-86b7-9dfbe33c0171\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vh9f4" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.968542 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/575c5872-9ab6-4a15-86b7-9dfbe33c0171-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-vh9f4\" (UID: \"575c5872-9ab6-4a15-86b7-9dfbe33c0171\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vh9f4" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.973296 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/575c5872-9ab6-4a15-86b7-9dfbe33c0171-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-vh9f4\" (UID: \"575c5872-9ab6-4a15-86b7-9dfbe33c0171\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vh9f4" Dec 01 09:44:19 crc kubenswrapper[4933]: I1201 09:44:19.988904 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5755ff94dd-48brc"] Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.000535 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgz6x\" (UniqueName: \"kubernetes.io/projected/575c5872-9ab6-4a15-86b7-9dfbe33c0171-kube-api-access-qgz6x\") pod \"nmstate-console-plugin-7fbb5f6569-vh9f4\" (UID: \"575c5872-9ab6-4a15-86b7-9dfbe33c0171\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vh9f4" Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.069940 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/36f06509-ae87-4388-b321-e5e39e3b429f-service-ca\") pod \"console-5755ff94dd-48brc\" (UID: \"36f06509-ae87-4388-b321-e5e39e3b429f\") " pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.070515 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/36f06509-ae87-4388-b321-e5e39e3b429f-oauth-serving-cert\") pod \"console-5755ff94dd-48brc\" (UID: \"36f06509-ae87-4388-b321-e5e39e3b429f\") " 
pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.070565 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/36f06509-ae87-4388-b321-e5e39e3b429f-console-serving-cert\") pod \"console-5755ff94dd-48brc\" (UID: \"36f06509-ae87-4388-b321-e5e39e3b429f\") " pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.070601 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/36f06509-ae87-4388-b321-e5e39e3b429f-console-config\") pod \"console-5755ff94dd-48brc\" (UID: \"36f06509-ae87-4388-b321-e5e39e3b429f\") " pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.070650 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/36f06509-ae87-4388-b321-e5e39e3b429f-console-oauth-config\") pod \"console-5755ff94dd-48brc\" (UID: \"36f06509-ae87-4388-b321-e5e39e3b429f\") " pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.070704 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36f06509-ae87-4388-b321-e5e39e3b429f-trusted-ca-bundle\") pod \"console-5755ff94dd-48brc\" (UID: \"36f06509-ae87-4388-b321-e5e39e3b429f\") " pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.070729 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntt8s\" (UniqueName: \"kubernetes.io/projected/36f06509-ae87-4388-b321-e5e39e3b429f-kube-api-access-ntt8s\") pod \"console-5755ff94dd-48brc\" (UID: \"36f06509-ae87-4388-b321-e5e39e3b429f\") " pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.172453 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/36f06509-ae87-4388-b321-e5e39e3b429f-console-config\") pod \"console-5755ff94dd-48brc\" (UID: \"36f06509-ae87-4388-b321-e5e39e3b429f\") " pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.172518 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/36f06509-ae87-4388-b321-e5e39e3b429f-console-oauth-config\") pod \"console-5755ff94dd-48brc\" (UID: \"36f06509-ae87-4388-b321-e5e39e3b429f\") " pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.172555 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36f06509-ae87-4388-b321-e5e39e3b429f-trusted-ca-bundle\") pod \"console-5755ff94dd-48brc\" (UID: \"36f06509-ae87-4388-b321-e5e39e3b429f\") " pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.172610 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntt8s\" (UniqueName: 
\"kubernetes.io/projected/36f06509-ae87-4388-b321-e5e39e3b429f-kube-api-access-ntt8s\") pod \"console-5755ff94dd-48brc\" (UID: \"36f06509-ae87-4388-b321-e5e39e3b429f\") " pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.172628 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/36f06509-ae87-4388-b321-e5e39e3b429f-service-ca\") pod \"console-5755ff94dd-48brc\" (UID: \"36f06509-ae87-4388-b321-e5e39e3b429f\") " pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.172668 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/36f06509-ae87-4388-b321-e5e39e3b429f-oauth-serving-cert\") pod \"console-5755ff94dd-48brc\" (UID: \"36f06509-ae87-4388-b321-e5e39e3b429f\") " pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.172697 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/36f06509-ae87-4388-b321-e5e39e3b429f-console-serving-cert\") pod \"console-5755ff94dd-48brc\" (UID: \"36f06509-ae87-4388-b321-e5e39e3b429f\") " pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.173669 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/36f06509-ae87-4388-b321-e5e39e3b429f-console-config\") pod \"console-5755ff94dd-48brc\" (UID: \"36f06509-ae87-4388-b321-e5e39e3b429f\") " pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.174034 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/36f06509-ae87-4388-b321-e5e39e3b429f-service-ca\") pod \"console-5755ff94dd-48brc\" (UID: \"36f06509-ae87-4388-b321-e5e39e3b429f\") " pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.174177 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/36f06509-ae87-4388-b321-e5e39e3b429f-oauth-serving-cert\") pod \"console-5755ff94dd-48brc\" (UID: \"36f06509-ae87-4388-b321-e5e39e3b429f\") " pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.174496 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36f06509-ae87-4388-b321-e5e39e3b429f-trusted-ca-bundle\") pod \"console-5755ff94dd-48brc\" (UID: \"36f06509-ae87-4388-b321-e5e39e3b429f\") " pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.177983 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/36f06509-ae87-4388-b321-e5e39e3b429f-console-serving-cert\") pod \"console-5755ff94dd-48brc\" (UID: \"36f06509-ae87-4388-b321-e5e39e3b429f\") " pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.178706 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/36f06509-ae87-4388-b321-e5e39e3b429f-console-oauth-config\") pod \"console-5755ff94dd-48brc\" (UID: \"36f06509-ae87-4388-b321-e5e39e3b429f\") " pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.180026 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-kqnxq"] Dec 01 09:44:20 crc kubenswrapper[4933]: W1201 09:44:20.191147 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18b298e1_b8df_4272_a30c_496424de8d76.slice/crio-fa1f5f30cd0793a1a383233dd65ad77de52d242ee051e3a900db3d286a5e4eba WatchSource:0}: Error finding container fa1f5f30cd0793a1a383233dd65ad77de52d242ee051e3a900db3d286a5e4eba: Status 404 returned error can't find the container with id fa1f5f30cd0793a1a383233dd65ad77de52d242ee051e3a900db3d286a5e4eba Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.192209 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntt8s\" (UniqueName: \"kubernetes.io/projected/36f06509-ae87-4388-b321-e5e39e3b429f-kube-api-access-ntt8s\") pod \"console-5755ff94dd-48brc\" (UID: \"36f06509-ae87-4388-b321-e5e39e3b429f\") " pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.231182 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8bmq8"] Dec 01 09:44:20 crc kubenswrapper[4933]: W1201 09:44:20.235083 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccec4c34_0e07_4b0a_a36a_a6bc46982fa6.slice/crio-c57d556d6958568bf95d2994fd39548674a02e70d79a3fe9bbec42164267e093 WatchSource:0}: Error finding container c57d556d6958568bf95d2994fd39548674a02e70d79a3fe9bbec42164267e093: Status 404 returned error can't find the container with id c57d556d6958568bf95d2994fd39548674a02e70d79a3fe9bbec42164267e093 Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.293648 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vh9f4" Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.308063 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.832574 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vh9f4"] Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.901282 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-sn9zt" event={"ID":"b95afca6-9346-403c-b1ab-d04d36537c40","Type":"ContainerStarted","Data":"f4b5995c4662f9bbf3a67ed712300d46cc8cd506158d38e3c2c3cf8571ae5f4b"} Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.908103 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vh9f4" event={"ID":"575c5872-9ab6-4a15-86b7-9dfbe33c0171","Type":"ContainerStarted","Data":"cf2cd8de59dd0937508835b3f7b3f1b532d995b7a4eb5aa6816ad243c8c86423"} Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.914562 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8bmq8" event={"ID":"ccec4c34-0e07-4b0a-a36a-a6bc46982fa6","Type":"ContainerStarted","Data":"c57d556d6958568bf95d2994fd39548674a02e70d79a3fe9bbec42164267e093"} Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.917564 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-kqnxq" event={"ID":"18b298e1-b8df-4272-a30c-496424de8d76","Type":"ContainerStarted","Data":"fa1f5f30cd0793a1a383233dd65ad77de52d242ee051e3a900db3d286a5e4eba"} Dec 01 09:44:20 crc kubenswrapper[4933]: I1201 09:44:20.977109 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5755ff94dd-48brc"] Dec 01 09:44:20 crc kubenswrapper[4933]: W1201 09:44:20.982488 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36f06509_ae87_4388_b321_e5e39e3b429f.slice/crio-fa7c7c711fc3599ea8d9c1d2782bfc3c07d52fd98378f3a1164dc33a65874b17 WatchSource:0}: Error finding container fa7c7c711fc3599ea8d9c1d2782bfc3c07d52fd98378f3a1164dc33a65874b17: Status 404 returned error can't find the container with id fa7c7c711fc3599ea8d9c1d2782bfc3c07d52fd98378f3a1164dc33a65874b17 Dec 01 09:44:21 crc kubenswrapper[4933]: I1201 09:44:21.532528 4933 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 01 09:44:21 crc kubenswrapper[4933]: I1201 09:44:21.927525 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5755ff94dd-48brc" event={"ID":"36f06509-ae87-4388-b321-e5e39e3b429f","Type":"ContainerStarted","Data":"7be82d5fb14b49f60fea9eb5b0ade9900aed5a2df31193c2c544ba2f3cd11027"} Dec 01 09:44:21 crc kubenswrapper[4933]: I1201 09:44:21.927573 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5755ff94dd-48brc" event={"ID":"36f06509-ae87-4388-b321-e5e39e3b429f","Type":"ContainerStarted","Data":"fa7c7c711fc3599ea8d9c1d2782bfc3c07d52fd98378f3a1164dc33a65874b17"} Dec 01 09:44:25 crc kubenswrapper[4933]: I1201 09:44:25.960418 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-sn9zt" event={"ID":"b95afca6-9346-403c-b1ab-d04d36537c40","Type":"ContainerStarted","Data":"231ea392e8b19884af1e91fdf77021cc9eb01c93dfd72b4d3e461345c20de5dc"} Dec 01 09:44:25 crc kubenswrapper[4933]: I1201 09:44:25.961295 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-nmstate/nmstate-handler-sn9zt" Dec 01 09:44:25 crc kubenswrapper[4933]: I1201 09:44:25.963674 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vh9f4" event={"ID":"575c5872-9ab6-4a15-86b7-9dfbe33c0171","Type":"ContainerStarted","Data":"ac84d397a09d0be2247a9d6046dad8581338b060437bd8ae9f4427c3c20a4bda"} Dec 01 09:44:25 crc kubenswrapper[4933]: I1201 09:44:25.965055 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8bmq8" event={"ID":"ccec4c34-0e07-4b0a-a36a-a6bc46982fa6","Type":"ContainerStarted","Data":"8460ca997359f219a96dd81208baf9249acbf62c453661d9baa5ddca4c9e0e7a"} Dec 01 09:44:25 crc kubenswrapper[4933]: I1201 09:44:25.965334 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8bmq8" Dec 01 09:44:25 crc kubenswrapper[4933]: I1201 09:44:25.967382 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-kqnxq" event={"ID":"18b298e1-b8df-4272-a30c-496424de8d76","Type":"ContainerStarted","Data":"5a96d3bf28e5ee7c9c5786812aa51d6ca8b2b5166a6df793db48b6ec99c24824"} Dec 01 09:44:26 crc kubenswrapper[4933]: I1201 09:44:26.010986 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5755ff94dd-48brc" podStartSLOduration=7.010951869 podStartE2EDuration="7.010951869s" podCreationTimestamp="2025-12-01 09:44:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:44:21.956702512 +0000 UTC m=+752.598426117" watchObservedRunningTime="2025-12-01 09:44:26.010951869 +0000 UTC m=+756.652675484" Dec 01 09:44:26 crc kubenswrapper[4933]: I1201 09:44:26.012117 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-sn9zt" podStartSLOduration=1.9284147310000002 podStartE2EDuration="7.012105808s" podCreationTimestamp="2025-12-01 09:44:19 +0000 UTC" firstStartedPulling="2025-12-01 09:44:19.925042276 +0000 UTC m=+750.566765881" lastFinishedPulling="2025-12-01 09:44:25.008733343 +0000 UTC m=+755.650456958" observedRunningTime="2025-12-01 09:44:26.009846672 +0000 UTC m=+756.651570287" watchObservedRunningTime="2025-12-01 09:44:26.012105808 +0000 UTC m=+756.653829423" Dec 01 09:44:26 crc kubenswrapper[4933]: I1201 09:44:26.029378 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vh9f4" podStartSLOduration=2.8693494189999997 podStartE2EDuration="7.029357375s" podCreationTimestamp="2025-12-01 09:44:19 +0000 UTC" firstStartedPulling="2025-12-01 09:44:20.847822334 +0000 UTC m=+751.489545949" lastFinishedPulling="2025-12-01 09:44:25.00783028 +0000 UTC m=+755.649553905" observedRunningTime="2025-12-01 09:44:26.028047093 +0000 UTC m=+756.669770708" watchObservedRunningTime="2025-12-01 09:44:26.029357375 +0000 UTC m=+756.671080990" Dec 01 09:44:26 crc kubenswrapper[4933]: I1201 09:44:26.042918 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8bmq8" podStartSLOduration=2.2519250250000002 podStartE2EDuration="7.042897521s" podCreationTimestamp="2025-12-01 09:44:19 +0000 UTC" firstStartedPulling="2025-12-01 09:44:20.237690581 +0000 UTC m=+750.879414196" lastFinishedPulling="2025-12-01 09:44:25.028663077 +0000 UTC m=+755.670386692" 
observedRunningTime="2025-12-01 09:44:26.042260904 +0000 UTC m=+756.683984519" watchObservedRunningTime="2025-12-01 09:44:26.042897521 +0000 UTC m=+756.684621136" Dec 01 09:44:28 crc kubenswrapper[4933]: I1201 09:44:28.992146 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-kqnxq" event={"ID":"18b298e1-b8df-4272-a30c-496424de8d76","Type":"ContainerStarted","Data":"23b312666e40bbfd8cf36138f5ed8375616eab02680f3541689601b6a7acc682"} Dec 01 09:44:29 crc kubenswrapper[4933]: I1201 09:44:29.013200 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-kqnxq" podStartSLOduration=2.226076014 podStartE2EDuration="10.013174047s" podCreationTimestamp="2025-12-01 09:44:19 +0000 UTC" firstStartedPulling="2025-12-01 09:44:20.193154127 +0000 UTC m=+750.834877742" lastFinishedPulling="2025-12-01 09:44:27.98025217 +0000 UTC m=+758.621975775" observedRunningTime="2025-12-01 09:44:29.008753538 +0000 UTC m=+759.650477153" watchObservedRunningTime="2025-12-01 09:44:29.013174047 +0000 UTC m=+759.654897662" Dec 01 09:44:30 crc kubenswrapper[4933]: I1201 09:44:30.308326 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:30 crc kubenswrapper[4933]: I1201 09:44:30.308521 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:30 crc kubenswrapper[4933]: I1201 09:44:30.314504 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:31 crc kubenswrapper[4933]: I1201 09:44:31.010453 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5755ff94dd-48brc" Dec 01 09:44:31 crc kubenswrapper[4933]: I1201 09:44:31.077188 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-x74qn"] Dec 01 09:44:34 crc kubenswrapper[4933]: I1201 09:44:34.902992 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-sn9zt" Dec 01 09:44:39 crc kubenswrapper[4933]: I1201 09:44:39.852861 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8bmq8" Dec 01 09:44:41 crc kubenswrapper[4933]: I1201 09:44:41.741061 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:44:41 crc kubenswrapper[4933]: I1201 09:44:41.741157 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:44:55 crc kubenswrapper[4933]: I1201 09:44:55.594760 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4"] Dec 01 09:44:55 crc kubenswrapper[4933]: I1201 09:44:55.596821 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4" Dec 01 09:44:55 crc kubenswrapper[4933]: I1201 09:44:55.599045 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 01 09:44:55 crc kubenswrapper[4933]: I1201 09:44:55.608536 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4"] Dec 01 09:44:55 crc kubenswrapper[4933]: I1201 09:44:55.645669 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t85s\" (UniqueName: \"kubernetes.io/projected/dccc76f6-8c6b-4e59-b7a1-0b0892183838-kube-api-access-4t85s\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4\" (UID: \"dccc76f6-8c6b-4e59-b7a1-0b0892183838\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4" Dec 01 09:44:55 crc kubenswrapper[4933]: I1201 09:44:55.645797 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dccc76f6-8c6b-4e59-b7a1-0b0892183838-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4\" (UID: \"dccc76f6-8c6b-4e59-b7a1-0b0892183838\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4" Dec 01 09:44:55 crc kubenswrapper[4933]: I1201 09:44:55.646049 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dccc76f6-8c6b-4e59-b7a1-0b0892183838-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4\" (UID: \"dccc76f6-8c6b-4e59-b7a1-0b0892183838\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4" Dec 01 09:44:55 crc kubenswrapper[4933]: I1201 09:44:55.747469 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dccc76f6-8c6b-4e59-b7a1-0b0892183838-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4\" (UID: \"dccc76f6-8c6b-4e59-b7a1-0b0892183838\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4" Dec 01 09:44:55 crc kubenswrapper[4933]: I1201 09:44:55.747542 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t85s\" (UniqueName: \"kubernetes.io/projected/dccc76f6-8c6b-4e59-b7a1-0b0892183838-kube-api-access-4t85s\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4\" (UID: \"dccc76f6-8c6b-4e59-b7a1-0b0892183838\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4" Dec 01 09:44:55 crc kubenswrapper[4933]: I1201 09:44:55.747598 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dccc76f6-8c6b-4e59-b7a1-0b0892183838-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4\" (UID: \"dccc76f6-8c6b-4e59-b7a1-0b0892183838\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4" Dec 01 09:44:55 crc kubenswrapper[4933]: I1201 09:44:55.748164 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/dccc76f6-8c6b-4e59-b7a1-0b0892183838-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4\" (UID: \"dccc76f6-8c6b-4e59-b7a1-0b0892183838\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4" Dec 01 09:44:55 crc kubenswrapper[4933]: I1201 09:44:55.748162 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dccc76f6-8c6b-4e59-b7a1-0b0892183838-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4\" (UID: \"dccc76f6-8c6b-4e59-b7a1-0b0892183838\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4" Dec 01 09:44:55 crc kubenswrapper[4933]: I1201 09:44:55.770677 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t85s\" (UniqueName: \"kubernetes.io/projected/dccc76f6-8c6b-4e59-b7a1-0b0892183838-kube-api-access-4t85s\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4\" (UID: \"dccc76f6-8c6b-4e59-b7a1-0b0892183838\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4" Dec 01 09:44:55 crc kubenswrapper[4933]: I1201 09:44:55.916440 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4" Dec 01 09:44:56 crc kubenswrapper[4933]: I1201 09:44:56.153569 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-x74qn" podUID="45bbe65f-8e73-4b73-863c-15db667e3e22" containerName="console" containerID="cri-o://02e468c3934f305934ecfa5e3d9730f9d52ab769b5b6e957d8e581fcdb65c26e" gracePeriod=15 Dec 01 09:44:56 crc kubenswrapper[4933]: I1201 09:44:56.548384 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4"] Dec 01 09:44:57 crc kubenswrapper[4933]: I1201 09:44:57.202427 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4" event={"ID":"dccc76f6-8c6b-4e59-b7a1-0b0892183838","Type":"ContainerStarted","Data":"035c7e4ec9c1c782b373b98ac5dffb365c50744803b86710ab0551c0bfce9e36"} Dec 01 09:44:57 crc kubenswrapper[4933]: I1201 09:44:57.202899 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4" event={"ID":"dccc76f6-8c6b-4e59-b7a1-0b0892183838","Type":"ContainerStarted","Data":"49118ac760c3f4ed9137fcdf942a7933ff2f9ae997fa089fe07fe05c0cab9b1b"} Dec 01 09:44:57 crc kubenswrapper[4933]: I1201 09:44:57.205847 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-x74qn_45bbe65f-8e73-4b73-863c-15db667e3e22/console/0.log" Dec 01 09:44:57 crc kubenswrapper[4933]: I1201 09:44:57.205924 4933 generic.go:334] "Generic (PLEG): container finished" podID="45bbe65f-8e73-4b73-863c-15db667e3e22" containerID="02e468c3934f305934ecfa5e3d9730f9d52ab769b5b6e957d8e581fcdb65c26e" exitCode=2 Dec 01 09:44:57 crc kubenswrapper[4933]: I1201 09:44:57.205982 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x74qn" event={"ID":"45bbe65f-8e73-4b73-863c-15db667e3e22","Type":"ContainerDied","Data":"02e468c3934f305934ecfa5e3d9730f9d52ab769b5b6e957d8e581fcdb65c26e"} Dec 01 09:44:58 crc 
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.073547 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x74qn"
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.154931 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/45bbe65f-8e73-4b73-863c-15db667e3e22-service-ca\") pod \"45bbe65f-8e73-4b73-863c-15db667e3e22\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") "
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.155039 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/45bbe65f-8e73-4b73-863c-15db667e3e22-console-config\") pod \"45bbe65f-8e73-4b73-863c-15db667e3e22\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") "
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.155108 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/45bbe65f-8e73-4b73-863c-15db667e3e22-console-serving-cert\") pod \"45bbe65f-8e73-4b73-863c-15db667e3e22\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") "
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.155194 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/45bbe65f-8e73-4b73-863c-15db667e3e22-console-oauth-config\") pod \"45bbe65f-8e73-4b73-863c-15db667e3e22\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") "
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.155242 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh7t7\" (UniqueName: \"kubernetes.io/projected/45bbe65f-8e73-4b73-863c-15db667e3e22-kube-api-access-lh7t7\") pod \"45bbe65f-8e73-4b73-863c-15db667e3e22\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") "
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.155293 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/45bbe65f-8e73-4b73-863c-15db667e3e22-oauth-serving-cert\") pod \"45bbe65f-8e73-4b73-863c-15db667e3e22\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") "
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.155348 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45bbe65f-8e73-4b73-863c-15db667e3e22-trusted-ca-bundle\") pod \"45bbe65f-8e73-4b73-863c-15db667e3e22\" (UID: \"45bbe65f-8e73-4b73-863c-15db667e3e22\") "
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.156121 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45bbe65f-8e73-4b73-863c-15db667e3e22-service-ca" (OuterVolumeSpecName: "service-ca") pod "45bbe65f-8e73-4b73-863c-15db667e3e22" (UID: "45bbe65f-8e73-4b73-863c-15db667e3e22"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.156229 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45bbe65f-8e73-4b73-863c-15db667e3e22-console-config" (OuterVolumeSpecName: "console-config") pod "45bbe65f-8e73-4b73-863c-15db667e3e22" (UID: "45bbe65f-8e73-4b73-863c-15db667e3e22"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.156747 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45bbe65f-8e73-4b73-863c-15db667e3e22-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "45bbe65f-8e73-4b73-863c-15db667e3e22" (UID: "45bbe65f-8e73-4b73-863c-15db667e3e22"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.156873 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45bbe65f-8e73-4b73-863c-15db667e3e22-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "45bbe65f-8e73-4b73-863c-15db667e3e22" (UID: "45bbe65f-8e73-4b73-863c-15db667e3e22"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.156997 4933 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/45bbe65f-8e73-4b73-863c-15db667e3e22-service-ca\") on node \"crc\" DevicePath \"\""
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.157022 4933 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/45bbe65f-8e73-4b73-863c-15db667e3e22-console-config\") on node \"crc\" DevicePath \"\""
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.163075 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45bbe65f-8e73-4b73-863c-15db667e3e22-kube-api-access-lh7t7" (OuterVolumeSpecName: "kube-api-access-lh7t7") pod "45bbe65f-8e73-4b73-863c-15db667e3e22" (UID: "45bbe65f-8e73-4b73-863c-15db667e3e22"). InnerVolumeSpecName "kube-api-access-lh7t7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.163187 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bbe65f-8e73-4b73-863c-15db667e3e22-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "45bbe65f-8e73-4b73-863c-15db667e3e22" (UID: "45bbe65f-8e73-4b73-863c-15db667e3e22"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.163834 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bbe65f-8e73-4b73-863c-15db667e3e22-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "45bbe65f-8e73-4b73-863c-15db667e3e22" (UID: "45bbe65f-8e73-4b73-863c-15db667e3e22"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.215499 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-x74qn_45bbe65f-8e73-4b73-863c-15db667e3e22/console/0.log"
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.215600 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x74qn" event={"ID":"45bbe65f-8e73-4b73-863c-15db667e3e22","Type":"ContainerDied","Data":"3b84cc9b2d9f4f7e63cde579d5adc5136c8909363ebf87ae239d6380a4d9f5b8"}
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.215646 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x74qn"
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.215657 4933 scope.go:117] "RemoveContainer" containerID="02e468c3934f305934ecfa5e3d9730f9d52ab769b5b6e957d8e581fcdb65c26e"
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.218867 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4" event={"ID":"dccc76f6-8c6b-4e59-b7a1-0b0892183838","Type":"ContainerDied","Data":"035c7e4ec9c1c782b373b98ac5dffb365c50744803b86710ab0551c0bfce9e36"}
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.218717 4933 generic.go:334] "Generic (PLEG): container finished" podID="dccc76f6-8c6b-4e59-b7a1-0b0892183838" containerID="035c7e4ec9c1c782b373b98ac5dffb365c50744803b86710ab0551c0bfce9e36" exitCode=0
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.258555 4933 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/45bbe65f-8e73-4b73-863c-15db667e3e22-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.258603 4933 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45bbe65f-8e73-4b73-863c-15db667e3e22-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.258620 4933 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/45bbe65f-8e73-4b73-863c-15db667e3e22-console-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.258633 4933 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/45bbe65f-8e73-4b73-863c-15db667e3e22-console-oauth-config\") on node \"crc\" DevicePath \"\""
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.258644 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh7t7\" (UniqueName: \"kubernetes.io/projected/45bbe65f-8e73-4b73-863c-15db667e3e22-kube-api-access-lh7t7\") on node \"crc\" DevicePath \"\""
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.264138 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-x74qn"]
Dec 01 09:44:58 crc kubenswrapper[4933]: I1201 09:44:58.268666 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-x74qn"]
Dec 01 09:44:59 crc kubenswrapper[4933]: I1201 09:44:59.144171 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zqh54"]
Dec 01 09:44:59 crc kubenswrapper[4933]: E1201 09:44:59.144616 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45bbe65f-8e73-4b73-863c-15db667e3e22" containerName="console"
"RemoveStaleState: removing container" podUID="45bbe65f-8e73-4b73-863c-15db667e3e22" containerName="console" Dec 01 09:44:59 crc kubenswrapper[4933]: I1201 09:44:59.144645 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="45bbe65f-8e73-4b73-863c-15db667e3e22" containerName="console" Dec 01 09:44:59 crc kubenswrapper[4933]: I1201 09:44:59.144760 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="45bbe65f-8e73-4b73-863c-15db667e3e22" containerName="console" Dec 01 09:44:59 crc kubenswrapper[4933]: I1201 09:44:59.145720 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqh54" Dec 01 09:44:59 crc kubenswrapper[4933]: I1201 09:44:59.159633 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zqh54"] Dec 01 09:44:59 crc kubenswrapper[4933]: I1201 09:44:59.274395 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7m7r\" (UniqueName: \"kubernetes.io/projected/f7dad2eb-d8be-4c00-aa0f-1ec672e24458-kube-api-access-l7m7r\") pod \"redhat-operators-zqh54\" (UID: \"f7dad2eb-d8be-4c00-aa0f-1ec672e24458\") " pod="openshift-marketplace/redhat-operators-zqh54" Dec 01 09:44:59 crc kubenswrapper[4933]: I1201 09:44:59.274498 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7dad2eb-d8be-4c00-aa0f-1ec672e24458-catalog-content\") pod \"redhat-operators-zqh54\" (UID: \"f7dad2eb-d8be-4c00-aa0f-1ec672e24458\") " pod="openshift-marketplace/redhat-operators-zqh54" Dec 01 09:44:59 crc kubenswrapper[4933]: I1201 09:44:59.274525 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7dad2eb-d8be-4c00-aa0f-1ec672e24458-utilities\") pod \"redhat-operators-zqh54\" (UID: \"f7dad2eb-d8be-4c00-aa0f-1ec672e24458\") " pod="openshift-marketplace/redhat-operators-zqh54" Dec 01 09:44:59 crc kubenswrapper[4933]: I1201 09:44:59.375859 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7dad2eb-d8be-4c00-aa0f-1ec672e24458-catalog-content\") pod \"redhat-operators-zqh54\" (UID: \"f7dad2eb-d8be-4c00-aa0f-1ec672e24458\") " pod="openshift-marketplace/redhat-operators-zqh54" Dec 01 09:44:59 crc kubenswrapper[4933]: I1201 09:44:59.375926 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7dad2eb-d8be-4c00-aa0f-1ec672e24458-utilities\") pod \"redhat-operators-zqh54\" (UID: \"f7dad2eb-d8be-4c00-aa0f-1ec672e24458\") " pod="openshift-marketplace/redhat-operators-zqh54" Dec 01 09:44:59 crc kubenswrapper[4933]: I1201 09:44:59.375994 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7m7r\" (UniqueName: \"kubernetes.io/projected/f7dad2eb-d8be-4c00-aa0f-1ec672e24458-kube-api-access-l7m7r\") pod \"redhat-operators-zqh54\" (UID: \"f7dad2eb-d8be-4c00-aa0f-1ec672e24458\") " pod="openshift-marketplace/redhat-operators-zqh54" Dec 01 09:44:59 crc kubenswrapper[4933]: I1201 09:44:59.376535 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7dad2eb-d8be-4c00-aa0f-1ec672e24458-utilities\") pod \"redhat-operators-zqh54\" (UID: 
\"f7dad2eb-d8be-4c00-aa0f-1ec672e24458\") " pod="openshift-marketplace/redhat-operators-zqh54" Dec 01 09:44:59 crc kubenswrapper[4933]: I1201 09:44:59.376739 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7dad2eb-d8be-4c00-aa0f-1ec672e24458-catalog-content\") pod \"redhat-operators-zqh54\" (UID: \"f7dad2eb-d8be-4c00-aa0f-1ec672e24458\") " pod="openshift-marketplace/redhat-operators-zqh54" Dec 01 09:44:59 crc kubenswrapper[4933]: I1201 09:44:59.397979 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7m7r\" (UniqueName: \"kubernetes.io/projected/f7dad2eb-d8be-4c00-aa0f-1ec672e24458-kube-api-access-l7m7r\") pod \"redhat-operators-zqh54\" (UID: \"f7dad2eb-d8be-4c00-aa0f-1ec672e24458\") " pod="openshift-marketplace/redhat-operators-zqh54" Dec 01 09:44:59 crc kubenswrapper[4933]: I1201 09:44:59.464184 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqh54" Dec 01 09:44:59 crc kubenswrapper[4933]: I1201 09:44:59.682097 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45bbe65f-8e73-4b73-863c-15db667e3e22" path="/var/lib/kubelet/pods/45bbe65f-8e73-4b73-863c-15db667e3e22/volumes" Dec 01 09:44:59 crc kubenswrapper[4933]: I1201 09:44:59.705236 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zqh54"] Dec 01 09:44:59 crc kubenswrapper[4933]: W1201 09:44:59.778069 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7dad2eb_d8be_4c00_aa0f_1ec672e24458.slice/crio-f7e0ea3c897457c5ef66e063b3b7295cdeb88251617d85b8ae8a80d2adc830f5 WatchSource:0}: Error finding container f7e0ea3c897457c5ef66e063b3b7295cdeb88251617d85b8ae8a80d2adc830f5: Status 404 returned error can't find the container with id f7e0ea3c897457c5ef66e063b3b7295cdeb88251617d85b8ae8a80d2adc830f5 Dec 01 09:45:00 crc kubenswrapper[4933]: I1201 09:45:00.183152 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409705-774rd"] Dec 01 09:45:00 crc kubenswrapper[4933]: I1201 09:45:00.184965 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-774rd" Dec 01 09:45:00 crc kubenswrapper[4933]: I1201 09:45:00.188610 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 09:45:00 crc kubenswrapper[4933]: I1201 09:45:00.189046 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 09:45:00 crc kubenswrapper[4933]: I1201 09:45:00.193976 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409705-774rd"] Dec 01 09:45:00 crc kubenswrapper[4933]: I1201 09:45:00.235472 4933 generic.go:334] "Generic (PLEG): container finished" podID="dccc76f6-8c6b-4e59-b7a1-0b0892183838" containerID="c5384215b15a66904d3606ff185be83ca26ca05f99f34131a0ca4830698de583" exitCode=0 Dec 01 09:45:00 crc kubenswrapper[4933]: I1201 09:45:00.235595 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4" event={"ID":"dccc76f6-8c6b-4e59-b7a1-0b0892183838","Type":"ContainerDied","Data":"c5384215b15a66904d3606ff185be83ca26ca05f99f34131a0ca4830698de583"} Dec 01 09:45:00 crc kubenswrapper[4933]: I1201 09:45:00.238535 4933 generic.go:334] "Generic (PLEG): container finished" podID="f7dad2eb-d8be-4c00-aa0f-1ec672e24458" containerID="c4bd1c5f3271484fe59419a46a9494d4a9f75285a32a5765c5fe30b1536c431f" exitCode=0 Dec 01 09:45:00 crc kubenswrapper[4933]: I1201 09:45:00.238586 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqh54" event={"ID":"f7dad2eb-d8be-4c00-aa0f-1ec672e24458","Type":"ContainerDied","Data":"c4bd1c5f3271484fe59419a46a9494d4a9f75285a32a5765c5fe30b1536c431f"} Dec 01 09:45:00 crc kubenswrapper[4933]: I1201 09:45:00.238629 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqh54" event={"ID":"f7dad2eb-d8be-4c00-aa0f-1ec672e24458","Type":"ContainerStarted","Data":"f7e0ea3c897457c5ef66e063b3b7295cdeb88251617d85b8ae8a80d2adc830f5"} Dec 01 09:45:00 crc kubenswrapper[4933]: I1201 09:45:00.295426 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a3113e5-0de6-4738-bedf-6650252f52ad-config-volume\") pod \"collect-profiles-29409705-774rd\" (UID: \"0a3113e5-0de6-4738-bedf-6650252f52ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-774rd" Dec 01 09:45:00 crc kubenswrapper[4933]: I1201 09:45:00.295474 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a3113e5-0de6-4738-bedf-6650252f52ad-secret-volume\") pod \"collect-profiles-29409705-774rd\" (UID: \"0a3113e5-0de6-4738-bedf-6650252f52ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-774rd" Dec 01 09:45:00 crc kubenswrapper[4933]: I1201 09:45:00.295525 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9sdr\" (UniqueName: \"kubernetes.io/projected/0a3113e5-0de6-4738-bedf-6650252f52ad-kube-api-access-m9sdr\") pod \"collect-profiles-29409705-774rd\" (UID: \"0a3113e5-0de6-4738-bedf-6650252f52ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-774rd" Dec 01 09:45:00 
Dec 01 09:45:00 crc kubenswrapper[4933]: I1201 09:45:00.397153 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a3113e5-0de6-4738-bedf-6650252f52ad-secret-volume\") pod \"collect-profiles-29409705-774rd\" (UID: \"0a3113e5-0de6-4738-bedf-6650252f52ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-774rd"
Dec 01 09:45:00 crc kubenswrapper[4933]: I1201 09:45:00.397196 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9sdr\" (UniqueName: \"kubernetes.io/projected/0a3113e5-0de6-4738-bedf-6650252f52ad-kube-api-access-m9sdr\") pod \"collect-profiles-29409705-774rd\" (UID: \"0a3113e5-0de6-4738-bedf-6650252f52ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-774rd"
Dec 01 09:45:00 crc kubenswrapper[4933]: I1201 09:45:00.399480 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a3113e5-0de6-4738-bedf-6650252f52ad-config-volume\") pod \"collect-profiles-29409705-774rd\" (UID: \"0a3113e5-0de6-4738-bedf-6650252f52ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-774rd"
Dec 01 09:45:00 crc kubenswrapper[4933]: I1201 09:45:00.412487 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a3113e5-0de6-4738-bedf-6650252f52ad-secret-volume\") pod \"collect-profiles-29409705-774rd\" (UID: \"0a3113e5-0de6-4738-bedf-6650252f52ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-774rd"
Dec 01 09:45:00 crc kubenswrapper[4933]: I1201 09:45:00.421264 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9sdr\" (UniqueName: \"kubernetes.io/projected/0a3113e5-0de6-4738-bedf-6650252f52ad-kube-api-access-m9sdr\") pod \"collect-profiles-29409705-774rd\" (UID: \"0a3113e5-0de6-4738-bedf-6650252f52ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-774rd"
Dec 01 09:45:00 crc kubenswrapper[4933]: I1201 09:45:00.503757 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-774rd"
Dec 01 09:45:00 crc kubenswrapper[4933]: I1201 09:45:00.740393 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409705-774rd"]
Dec 01 09:45:01 crc kubenswrapper[4933]: I1201 09:45:01.424720 4933 generic.go:334] "Generic (PLEG): container finished" podID="0a3113e5-0de6-4738-bedf-6650252f52ad" containerID="7d1ef5d4efb645300909768838bafe371032e09633e263bea5786720bc9d9405" exitCode=0
Dec 01 09:45:01 crc kubenswrapper[4933]: I1201 09:45:01.424837 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-774rd" event={"ID":"0a3113e5-0de6-4738-bedf-6650252f52ad","Type":"ContainerDied","Data":"7d1ef5d4efb645300909768838bafe371032e09633e263bea5786720bc9d9405"}
Dec 01 09:45:01 crc kubenswrapper[4933]: I1201 09:45:01.425366 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-774rd" event={"ID":"0a3113e5-0de6-4738-bedf-6650252f52ad","Type":"ContainerStarted","Data":"cb650c32c6826371ca9a722e1837475a24a4772582c11f298d7c747fca3dd400"}
Dec 01 09:45:01 crc kubenswrapper[4933]: I1201 09:45:01.433953 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqh54" event={"ID":"f7dad2eb-d8be-4c00-aa0f-1ec672e24458","Type":"ContainerStarted","Data":"08207e8a884be3a6f7ae02e9504ad5832e83f564afb38a5e90f60f120a0195d1"}
Dec 01 09:45:01 crc kubenswrapper[4933]: I1201 09:45:01.436952 4933 generic.go:334] "Generic (PLEG): container finished" podID="dccc76f6-8c6b-4e59-b7a1-0b0892183838" containerID="910e7b2ed3d1c4035b2a44536a97c54bd873b9e1f8b02249d18e7139f6da31e4" exitCode=0
Dec 01 09:45:01 crc kubenswrapper[4933]: I1201 09:45:01.436989 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4" event={"ID":"dccc76f6-8c6b-4e59-b7a1-0b0892183838","Type":"ContainerDied","Data":"910e7b2ed3d1c4035b2a44536a97c54bd873b9e1f8b02249d18e7139f6da31e4"}
Dec 01 09:45:03 crc kubenswrapper[4933]: I1201 09:45:03.647958 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4"
Dec 01 09:45:03 crc kubenswrapper[4933]: I1201 09:45:03.799025 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t85s\" (UniqueName: \"kubernetes.io/projected/dccc76f6-8c6b-4e59-b7a1-0b0892183838-kube-api-access-4t85s\") pod \"dccc76f6-8c6b-4e59-b7a1-0b0892183838\" (UID: \"dccc76f6-8c6b-4e59-b7a1-0b0892183838\") "
Dec 01 09:45:03 crc kubenswrapper[4933]: I1201 09:45:03.799116 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dccc76f6-8c6b-4e59-b7a1-0b0892183838-bundle\") pod \"dccc76f6-8c6b-4e59-b7a1-0b0892183838\" (UID: \"dccc76f6-8c6b-4e59-b7a1-0b0892183838\") "
Dec 01 09:45:03 crc kubenswrapper[4933]: I1201 09:45:03.799149 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dccc76f6-8c6b-4e59-b7a1-0b0892183838-util\") pod \"dccc76f6-8c6b-4e59-b7a1-0b0892183838\" (UID: \"dccc76f6-8c6b-4e59-b7a1-0b0892183838\") "
Dec 01 09:45:03 crc kubenswrapper[4933]: I1201 09:45:03.801284 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dccc76f6-8c6b-4e59-b7a1-0b0892183838-bundle" (OuterVolumeSpecName: "bundle") pod "dccc76f6-8c6b-4e59-b7a1-0b0892183838" (UID: "dccc76f6-8c6b-4e59-b7a1-0b0892183838"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:45:03 crc kubenswrapper[4933]: I1201 09:45:03.810025 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dccc76f6-8c6b-4e59-b7a1-0b0892183838-util" (OuterVolumeSpecName: "util") pod "dccc76f6-8c6b-4e59-b7a1-0b0892183838" (UID: "dccc76f6-8c6b-4e59-b7a1-0b0892183838"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:45:03 crc kubenswrapper[4933]: I1201 09:45:03.864805 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dccc76f6-8c6b-4e59-b7a1-0b0892183838-kube-api-access-4t85s" (OuterVolumeSpecName: "kube-api-access-4t85s") pod "dccc76f6-8c6b-4e59-b7a1-0b0892183838" (UID: "dccc76f6-8c6b-4e59-b7a1-0b0892183838"). InnerVolumeSpecName "kube-api-access-4t85s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:45:03 crc kubenswrapper[4933]: I1201 09:45:03.900262 4933 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dccc76f6-8c6b-4e59-b7a1-0b0892183838-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 09:45:03 crc kubenswrapper[4933]: I1201 09:45:03.900386 4933 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dccc76f6-8c6b-4e59-b7a1-0b0892183838-util\") on node \"crc\" DevicePath \"\""
Dec 01 09:45:03 crc kubenswrapper[4933]: I1201 09:45:03.900397 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4t85s\" (UniqueName: \"kubernetes.io/projected/dccc76f6-8c6b-4e59-b7a1-0b0892183838-kube-api-access-4t85s\") on node \"crc\" DevicePath \"\""
Dec 01 09:45:03 crc kubenswrapper[4933]: I1201 09:45:03.902889 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-774rd"
Dec 01 09:45:04 crc kubenswrapper[4933]: I1201 09:45:04.103148 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9sdr\" (UniqueName: \"kubernetes.io/projected/0a3113e5-0de6-4738-bedf-6650252f52ad-kube-api-access-m9sdr\") pod \"0a3113e5-0de6-4738-bedf-6650252f52ad\" (UID: \"0a3113e5-0de6-4738-bedf-6650252f52ad\") "
Dec 01 09:45:04 crc kubenswrapper[4933]: I1201 09:45:04.103222 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a3113e5-0de6-4738-bedf-6650252f52ad-config-volume\") pod \"0a3113e5-0de6-4738-bedf-6650252f52ad\" (UID: \"0a3113e5-0de6-4738-bedf-6650252f52ad\") "
Dec 01 09:45:04 crc kubenswrapper[4933]: I1201 09:45:04.103441 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a3113e5-0de6-4738-bedf-6650252f52ad-secret-volume\") pod \"0a3113e5-0de6-4738-bedf-6650252f52ad\" (UID: \"0a3113e5-0de6-4738-bedf-6650252f52ad\") "
Dec 01 09:45:04 crc kubenswrapper[4933]: I1201 09:45:04.122796 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a3113e5-0de6-4738-bedf-6650252f52ad-config-volume" (OuterVolumeSpecName: "config-volume") pod "0a3113e5-0de6-4738-bedf-6650252f52ad" (UID: "0a3113e5-0de6-4738-bedf-6650252f52ad"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:45:04 crc kubenswrapper[4933]: I1201 09:45:04.205244 4933 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a3113e5-0de6-4738-bedf-6650252f52ad-config-volume\") on node \"crc\" DevicePath \"\""
Dec 01 09:45:04 crc kubenswrapper[4933]: I1201 09:45:04.209171 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a3113e5-0de6-4738-bedf-6650252f52ad-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0a3113e5-0de6-4738-bedf-6650252f52ad" (UID: "0a3113e5-0de6-4738-bedf-6650252f52ad"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:45:04 crc kubenswrapper[4933]: I1201 09:45:04.209653 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a3113e5-0de6-4738-bedf-6650252f52ad-kube-api-access-m9sdr" (OuterVolumeSpecName: "kube-api-access-m9sdr") pod "0a3113e5-0de6-4738-bedf-6650252f52ad" (UID: "0a3113e5-0de6-4738-bedf-6650252f52ad"). InnerVolumeSpecName "kube-api-access-m9sdr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:45:04 crc kubenswrapper[4933]: I1201 09:45:04.306678 4933 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a3113e5-0de6-4738-bedf-6650252f52ad-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 01 09:45:04 crc kubenswrapper[4933]: I1201 09:45:04.306742 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9sdr\" (UniqueName: \"kubernetes.io/projected/0a3113e5-0de6-4738-bedf-6650252f52ad-kube-api-access-m9sdr\") on node \"crc\" DevicePath \"\""
Dec 01 09:45:04 crc kubenswrapper[4933]: I1201 09:45:04.604901 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-774rd" event={"ID":"0a3113e5-0de6-4738-bedf-6650252f52ad","Type":"ContainerDied","Data":"cb650c32c6826371ca9a722e1837475a24a4772582c11f298d7c747fca3dd400"}
Dec 01 09:45:04 crc kubenswrapper[4933]: I1201 09:45:04.604945 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb650c32c6826371ca9a722e1837475a24a4772582c11f298d7c747fca3dd400"
Dec 01 09:45:04 crc kubenswrapper[4933]: I1201 09:45:04.605010 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-774rd"
Dec 01 09:45:04 crc kubenswrapper[4933]: I1201 09:45:04.623341 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4" event={"ID":"dccc76f6-8c6b-4e59-b7a1-0b0892183838","Type":"ContainerDied","Data":"49118ac760c3f4ed9137fcdf942a7933ff2f9ae997fa089fe07fe05c0cab9b1b"}
Dec 01 09:45:04 crc kubenswrapper[4933]: I1201 09:45:04.623416 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49118ac760c3f4ed9137fcdf942a7933ff2f9ae997fa089fe07fe05c0cab9b1b"
Dec 01 09:45:04 crc kubenswrapper[4933]: I1201 09:45:04.623556 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4"
Dec 01 09:45:07 crc kubenswrapper[4933]: I1201 09:45:07.697384 4933 generic.go:334] "Generic (PLEG): container finished" podID="f7dad2eb-d8be-4c00-aa0f-1ec672e24458" containerID="08207e8a884be3a6f7ae02e9504ad5832e83f564afb38a5e90f60f120a0195d1" exitCode=0
Dec 01 09:45:07 crc kubenswrapper[4933]: I1201 09:45:07.697493 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqh54" event={"ID":"f7dad2eb-d8be-4c00-aa0f-1ec672e24458","Type":"ContainerDied","Data":"08207e8a884be3a6f7ae02e9504ad5832e83f564afb38a5e90f60f120a0195d1"}
Dec 01 09:45:09 crc kubenswrapper[4933]: I1201 09:45:09.776081 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqh54" event={"ID":"f7dad2eb-d8be-4c00-aa0f-1ec672e24458","Type":"ContainerStarted","Data":"60698cd638626e49cbf485151caa4f0b750390053a553b70d268ce87be134821"}
Dec 01 09:45:10 crc kubenswrapper[4933]: I1201 09:45:10.046722 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zqh54" podStartSLOduration=2.810675871 podStartE2EDuration="11.046701136s" podCreationTimestamp="2025-12-01 09:44:59 +0000 UTC" firstStartedPulling="2025-12-01 09:45:00.24021158 +0000 UTC m=+790.881935195" lastFinishedPulling="2025-12-01 09:45:08.476236845 +0000 UTC m=+799.117960460" observedRunningTime="2025-12-01 09:45:10.041676251 +0000 UTC m=+800.683399866" watchObservedRunningTime="2025-12-01 09:45:10.046701136 +0000 UTC m=+800.688424761"
Dec 01 09:45:11 crc kubenswrapper[4933]: I1201 09:45:11.741155 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 09:45:11 crc kubenswrapper[4933]: I1201 09:45:11.741648 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.220941 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7d44559b9d-mg6pw"]
Dec 01 09:45:13 crc kubenswrapper[4933]: E1201 09:45:13.221275 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dccc76f6-8c6b-4e59-b7a1-0b0892183838" containerName="extract"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.221299 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="dccc76f6-8c6b-4e59-b7a1-0b0892183838" containerName="extract"
Dec 01 09:45:13 crc kubenswrapper[4933]: E1201 09:45:13.221329 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dccc76f6-8c6b-4e59-b7a1-0b0892183838" containerName="util"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.221335 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="dccc76f6-8c6b-4e59-b7a1-0b0892183838" containerName="util"
Dec 01 09:45:13 crc kubenswrapper[4933]: E1201 09:45:13.221342 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dccc76f6-8c6b-4e59-b7a1-0b0892183838" containerName="pull"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.221348 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="dccc76f6-8c6b-4e59-b7a1-0b0892183838" containerName="pull"
Dec 01 09:45:13 crc kubenswrapper[4933]: E1201 09:45:13.221357 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a3113e5-0de6-4738-bedf-6650252f52ad" containerName="collect-profiles"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.221363 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a3113e5-0de6-4738-bedf-6650252f52ad" containerName="collect-profiles"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.221503 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a3113e5-0de6-4738-bedf-6650252f52ad" containerName="collect-profiles"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.221520 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="dccc76f6-8c6b-4e59-b7a1-0b0892183838" containerName="extract"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.222045 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7d44559b9d-mg6pw"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.226262 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.228353 4933 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.228396 4933 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.229164 4933 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-qt5rc"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.229202 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.246123 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7d44559b9d-mg6pw"]
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.424347 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c2e1948-244d-4059-9f94-0675dfaa751f-webhook-cert\") pod \"metallb-operator-controller-manager-7d44559b9d-mg6pw\" (UID: \"7c2e1948-244d-4059-9f94-0675dfaa751f\") " pod="metallb-system/metallb-operator-controller-manager-7d44559b9d-mg6pw"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.424486 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c2e1948-244d-4059-9f94-0675dfaa751f-apiservice-cert\") pod \"metallb-operator-controller-manager-7d44559b9d-mg6pw\" (UID: \"7c2e1948-244d-4059-9f94-0675dfaa751f\") " pod="metallb-system/metallb-operator-controller-manager-7d44559b9d-mg6pw"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.424534 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxgbk\" (UniqueName: \"kubernetes.io/projected/7c2e1948-244d-4059-9f94-0675dfaa751f-kube-api-access-lxgbk\") pod \"metallb-operator-controller-manager-7d44559b9d-mg6pw\" (UID: \"7c2e1948-244d-4059-9f94-0675dfaa751f\") " pod="metallb-system/metallb-operator-controller-manager-7d44559b9d-mg6pw"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.525968 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c2e1948-244d-4059-9f94-0675dfaa751f-apiservice-cert\") pod \"metallb-operator-controller-manager-7d44559b9d-mg6pw\" (UID: \"7c2e1948-244d-4059-9f94-0675dfaa751f\") " pod="metallb-system/metallb-operator-controller-manager-7d44559b9d-mg6pw"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.526058 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxgbk\" (UniqueName: \"kubernetes.io/projected/7c2e1948-244d-4059-9f94-0675dfaa751f-kube-api-access-lxgbk\") pod \"metallb-operator-controller-manager-7d44559b9d-mg6pw\" (UID: \"7c2e1948-244d-4059-9f94-0675dfaa751f\") " pod="metallb-system/metallb-operator-controller-manager-7d44559b9d-mg6pw"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.526109 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c2e1948-244d-4059-9f94-0675dfaa751f-webhook-cert\") pod \"metallb-operator-controller-manager-7d44559b9d-mg6pw\" (UID: \"7c2e1948-244d-4059-9f94-0675dfaa751f\") " pod="metallb-system/metallb-operator-controller-manager-7d44559b9d-mg6pw"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.536285 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c2e1948-244d-4059-9f94-0675dfaa751f-webhook-cert\") pod \"metallb-operator-controller-manager-7d44559b9d-mg6pw\" (UID: \"7c2e1948-244d-4059-9f94-0675dfaa751f\") " pod="metallb-system/metallb-operator-controller-manager-7d44559b9d-mg6pw"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.536706 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c2e1948-244d-4059-9f94-0675dfaa751f-apiservice-cert\") pod \"metallb-operator-controller-manager-7d44559b9d-mg6pw\" (UID: \"7c2e1948-244d-4059-9f94-0675dfaa751f\") " pod="metallb-system/metallb-operator-controller-manager-7d44559b9d-mg6pw"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.545236 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxgbk\" (UniqueName: \"kubernetes.io/projected/7c2e1948-244d-4059-9f94-0675dfaa751f-kube-api-access-lxgbk\") pod \"metallb-operator-controller-manager-7d44559b9d-mg6pw\" (UID: \"7c2e1948-244d-4059-9f94-0675dfaa751f\") " pod="metallb-system/metallb-operator-controller-manager-7d44559b9d-mg6pw"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.592695 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-64d64d5bf5-hc5zj"]
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.593841 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-64d64d5bf5-hc5zj"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.596125 4933 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.597043 4933 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.597445 4933 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-7797b"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.614101 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-64d64d5bf5-hc5zj"]
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.627538 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/db30f3a8-b953-4818-8999-c247744b8c1a-apiservice-cert\") pod \"metallb-operator-webhook-server-64d64d5bf5-hc5zj\" (UID: \"db30f3a8-b953-4818-8999-c247744b8c1a\") " pod="metallb-system/metallb-operator-webhook-server-64d64d5bf5-hc5zj"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.627627 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/db30f3a8-b953-4818-8999-c247744b8c1a-webhook-cert\") pod \"metallb-operator-webhook-server-64d64d5bf5-hc5zj\" (UID: \"db30f3a8-b953-4818-8999-c247744b8c1a\") " pod="metallb-system/metallb-operator-webhook-server-64d64d5bf5-hc5zj"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.627664 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5ns8\" (UniqueName: \"kubernetes.io/projected/db30f3a8-b953-4818-8999-c247744b8c1a-kube-api-access-j5ns8\") pod \"metallb-operator-webhook-server-64d64d5bf5-hc5zj\" (UID: \"db30f3a8-b953-4818-8999-c247744b8c1a\") " pod="metallb-system/metallb-operator-webhook-server-64d64d5bf5-hc5zj"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.728928 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/db30f3a8-b953-4818-8999-c247744b8c1a-apiservice-cert\") pod \"metallb-operator-webhook-server-64d64d5bf5-hc5zj\" (UID: \"db30f3a8-b953-4818-8999-c247744b8c1a\") " pod="metallb-system/metallb-operator-webhook-server-64d64d5bf5-hc5zj"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.728996 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/db30f3a8-b953-4818-8999-c247744b8c1a-webhook-cert\") pod \"metallb-operator-webhook-server-64d64d5bf5-hc5zj\" (UID: \"db30f3a8-b953-4818-8999-c247744b8c1a\") " pod="metallb-system/metallb-operator-webhook-server-64d64d5bf5-hc5zj"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.729027 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5ns8\" (UniqueName: \"kubernetes.io/projected/db30f3a8-b953-4818-8999-c247744b8c1a-kube-api-access-j5ns8\") pod \"metallb-operator-webhook-server-64d64d5bf5-hc5zj\" (UID: \"db30f3a8-b953-4818-8999-c247744b8c1a\") " pod="metallb-system/metallb-operator-webhook-server-64d64d5bf5-hc5zj"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.735129 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/db30f3a8-b953-4818-8999-c247744b8c1a-apiservice-cert\") pod \"metallb-operator-webhook-server-64d64d5bf5-hc5zj\" (UID: \"db30f3a8-b953-4818-8999-c247744b8c1a\") " pod="metallb-system/metallb-operator-webhook-server-64d64d5bf5-hc5zj"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.752972 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5ns8\" (UniqueName: \"kubernetes.io/projected/db30f3a8-b953-4818-8999-c247744b8c1a-kube-api-access-j5ns8\") pod \"metallb-operator-webhook-server-64d64d5bf5-hc5zj\" (UID: \"db30f3a8-b953-4818-8999-c247744b8c1a\") " pod="metallb-system/metallb-operator-webhook-server-64d64d5bf5-hc5zj"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.753669 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/db30f3a8-b953-4818-8999-c247744b8c1a-webhook-cert\") pod \"metallb-operator-webhook-server-64d64d5bf5-hc5zj\" (UID: \"db30f3a8-b953-4818-8999-c247744b8c1a\") " pod="metallb-system/metallb-operator-webhook-server-64d64d5bf5-hc5zj"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.839998 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7d44559b9d-mg6pw"
Dec 01 09:45:13 crc kubenswrapper[4933]: I1201 09:45:13.918368 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-64d64d5bf5-hc5zj"
Dec 01 09:45:14 crc kubenswrapper[4933]: I1201 09:45:14.838387 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7d44559b9d-mg6pw"]
Dec 01 09:45:14 crc kubenswrapper[4933]: W1201 09:45:14.851440 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c2e1948_244d_4059_9f94_0675dfaa751f.slice/crio-e4f2f604dcd93df9cc0a668d6dd285aadb6e0e137b79b568b387c56cab640554 WatchSource:0}: Error finding container e4f2f604dcd93df9cc0a668d6dd285aadb6e0e137b79b568b387c56cab640554: Status 404 returned error can't find the container with id e4f2f604dcd93df9cc0a668d6dd285aadb6e0e137b79b568b387c56cab640554
Dec 01 09:45:14 crc kubenswrapper[4933]: I1201 09:45:14.874289 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-64d64d5bf5-hc5zj"]
Dec 01 09:45:14 crc kubenswrapper[4933]: W1201 09:45:14.884801 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb30f3a8_b953_4818_8999_c247744b8c1a.slice/crio-fc118b39f63decd4a31464583d8221d3e8a0dd446ae36af208a52fe06635fac1 WatchSource:0}: Error finding container fc118b39f63decd4a31464583d8221d3e8a0dd446ae36af208a52fe06635fac1: Status 404 returned error can't find the container with id fc118b39f63decd4a31464583d8221d3e8a0dd446ae36af208a52fe06635fac1
Dec 01 09:45:15 crc kubenswrapper[4933]: I1201 09:45:15.818402 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7d44559b9d-mg6pw" event={"ID":"7c2e1948-244d-4059-9f94-0675dfaa751f","Type":"ContainerStarted","Data":"e4f2f604dcd93df9cc0a668d6dd285aadb6e0e137b79b568b387c56cab640554"}
Dec 01 09:45:15 crc kubenswrapper[4933]: I1201 09:45:15.820110 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-64d64d5bf5-hc5zj" event={"ID":"db30f3a8-b953-4818-8999-c247744b8c1a","Type":"ContainerStarted","Data":"fc118b39f63decd4a31464583d8221d3e8a0dd446ae36af208a52fe06635fac1"}
Dec 01 09:45:19 crc kubenswrapper[4933]: I1201 09:45:19.584999 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zqh54"
Dec 01 09:45:19 crc kubenswrapper[4933]: I1201 09:45:19.588355 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zqh54"
Dec 01 09:45:19 crc kubenswrapper[4933]: I1201 09:45:19.649180 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zqh54"
Dec 01 09:45:19 crc kubenswrapper[4933]: I1201 09:45:19.953739 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zqh54"
Dec 01 09:45:20 crc kubenswrapper[4933]: I1201 09:45:20.887919 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7d44559b9d-mg6pw" event={"ID":"7c2e1948-244d-4059-9f94-0675dfaa751f","Type":"ContainerStarted","Data":"024b739870fb30557af36d1c416c30e0aa6d3e38cd89fa6d57de6c0fffda90ea"}
Dec 01 09:45:20 crc kubenswrapper[4933]: I1201 09:45:20.888527 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7d44559b9d-mg6pw"
Dec 01 09:45:20 crc kubenswrapper[4933]: I1201 09:45:20.946891 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7d44559b9d-mg6pw" podStartSLOduration=2.51413136 podStartE2EDuration="7.946864316s" podCreationTimestamp="2025-12-01 09:45:13 +0000 UTC" firstStartedPulling="2025-12-01 09:45:14.85899958 +0000 UTC m=+805.500723195" lastFinishedPulling="2025-12-01 09:45:20.291732516 +0000 UTC m=+810.933456151" observedRunningTime="2025-12-01 09:45:20.93935231 +0000 UTC m=+811.581075925" watchObservedRunningTime="2025-12-01 09:45:20.946864316 +0000 UTC m=+811.588587931"
Dec 01 09:45:21 crc kubenswrapper[4933]: I1201 09:45:21.135818 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zqh54"]
Dec 01 09:45:22 crc kubenswrapper[4933]: I1201 09:45:22.903887 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-64d64d5bf5-hc5zj" event={"ID":"db30f3a8-b953-4818-8999-c247744b8c1a","Type":"ContainerStarted","Data":"5fc73f83954cc4fb850df4f63728878260509cf834535afc6814a8138d93609b"}
Dec 01 09:45:22 crc kubenswrapper[4933]: I1201 09:45:22.904423 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-64d64d5bf5-hc5zj"
Dec 01 09:45:22 crc kubenswrapper[4933]: I1201 09:45:22.904035 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zqh54" podUID="f7dad2eb-d8be-4c00-aa0f-1ec672e24458" containerName="registry-server" containerID="cri-o://60698cd638626e49cbf485151caa4f0b750390053a553b70d268ce87be134821" gracePeriod=2
Dec 01 09:45:22 crc kubenswrapper[4933]: I1201 09:45:22.942235 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-64d64d5bf5-hc5zj" podStartSLOduration=2.329428486 podStartE2EDuration="9.942207396s" podCreationTimestamp="2025-12-01 09:45:13 +0000 UTC" firstStartedPulling="2025-12-01 09:45:14.897925352 +0000 UTC m=+805.539648967" lastFinishedPulling="2025-12-01 09:45:22.510704262 +0000 UTC m=+813.152427877" observedRunningTime="2025-12-01 09:45:22.939223082 +0000 UTC m=+813.580946707" watchObservedRunningTime="2025-12-01 09:45:22.942207396 +0000 UTC m=+813.583931031"
Dec 01 09:45:23 crc kubenswrapper[4933]: I1201 09:45:23.305238 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqh54"
Dec 01 09:45:23 crc kubenswrapper[4933]: I1201 09:45:23.452090 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7dad2eb-d8be-4c00-aa0f-1ec672e24458-utilities\") pod \"f7dad2eb-d8be-4c00-aa0f-1ec672e24458\" (UID: \"f7dad2eb-d8be-4c00-aa0f-1ec672e24458\") "
Dec 01 09:45:23 crc kubenswrapper[4933]: I1201 09:45:23.452279 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7m7r\" (UniqueName: \"kubernetes.io/projected/f7dad2eb-d8be-4c00-aa0f-1ec672e24458-kube-api-access-l7m7r\") pod \"f7dad2eb-d8be-4c00-aa0f-1ec672e24458\" (UID: \"f7dad2eb-d8be-4c00-aa0f-1ec672e24458\") "
Dec 01 09:45:23 crc kubenswrapper[4933]: I1201 09:45:23.452436 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7dad2eb-d8be-4c00-aa0f-1ec672e24458-catalog-content\") pod \"f7dad2eb-d8be-4c00-aa0f-1ec672e24458\" (UID: \"f7dad2eb-d8be-4c00-aa0f-1ec672e24458\") "
Dec 01 09:45:23 crc kubenswrapper[4933]: I1201 09:45:23.453474 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7dad2eb-d8be-4c00-aa0f-1ec672e24458-utilities" (OuterVolumeSpecName: "utilities") pod "f7dad2eb-d8be-4c00-aa0f-1ec672e24458" (UID: "f7dad2eb-d8be-4c00-aa0f-1ec672e24458"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:45:23 crc kubenswrapper[4933]: I1201 09:45:23.462831 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7dad2eb-d8be-4c00-aa0f-1ec672e24458-kube-api-access-l7m7r" (OuterVolumeSpecName: "kube-api-access-l7m7r") pod "f7dad2eb-d8be-4c00-aa0f-1ec672e24458" (UID: "f7dad2eb-d8be-4c00-aa0f-1ec672e24458"). InnerVolumeSpecName "kube-api-access-l7m7r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:45:23 crc kubenswrapper[4933]: I1201 09:45:23.554637 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7m7r\" (UniqueName: \"kubernetes.io/projected/f7dad2eb-d8be-4c00-aa0f-1ec672e24458-kube-api-access-l7m7r\") on node \"crc\" DevicePath \"\""
Dec 01 09:45:23 crc kubenswrapper[4933]: I1201 09:45:23.554691 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7dad2eb-d8be-4c00-aa0f-1ec672e24458-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 09:45:23 crc kubenswrapper[4933]: I1201 09:45:23.566587 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7dad2eb-d8be-4c00-aa0f-1ec672e24458-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7dad2eb-d8be-4c00-aa0f-1ec672e24458" (UID: "f7dad2eb-d8be-4c00-aa0f-1ec672e24458"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:45:23 crc kubenswrapper[4933]: I1201 09:45:23.656603 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7dad2eb-d8be-4c00-aa0f-1ec672e24458-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 09:45:23 crc kubenswrapper[4933]: I1201 09:45:23.912538 4933 generic.go:334] "Generic (PLEG): container finished" podID="f7dad2eb-d8be-4c00-aa0f-1ec672e24458" containerID="60698cd638626e49cbf485151caa4f0b750390053a553b70d268ce87be134821" exitCode=0 Dec 01 09:45:23 crc kubenswrapper[4933]: I1201 09:45:23.912617 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqh54" event={"ID":"f7dad2eb-d8be-4c00-aa0f-1ec672e24458","Type":"ContainerDied","Data":"60698cd638626e49cbf485151caa4f0b750390053a553b70d268ce87be134821"} Dec 01 09:45:23 crc kubenswrapper[4933]: I1201 09:45:23.912713 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqh54" event={"ID":"f7dad2eb-d8be-4c00-aa0f-1ec672e24458","Type":"ContainerDied","Data":"f7e0ea3c897457c5ef66e063b3b7295cdeb88251617d85b8ae8a80d2adc830f5"} Dec 01 09:45:23 crc kubenswrapper[4933]: I1201 09:45:23.912750 4933 scope.go:117] "RemoveContainer" containerID="60698cd638626e49cbf485151caa4f0b750390053a553b70d268ce87be134821" Dec 01 09:45:23 crc kubenswrapper[4933]: I1201 09:45:23.913435 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqh54" Dec 01 09:45:23 crc kubenswrapper[4933]: I1201 09:45:23.933449 4933 scope.go:117] "RemoveContainer" containerID="08207e8a884be3a6f7ae02e9504ad5832e83f564afb38a5e90f60f120a0195d1" Dec 01 09:45:23 crc kubenswrapper[4933]: I1201 09:45:23.937390 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zqh54"] Dec 01 09:45:23 crc kubenswrapper[4933]: I1201 09:45:23.943192 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zqh54"] Dec 01 09:45:23 crc kubenswrapper[4933]: I1201 09:45:23.957324 4933 scope.go:117] "RemoveContainer" containerID="c4bd1c5f3271484fe59419a46a9494d4a9f75285a32a5765c5fe30b1536c431f" Dec 01 09:45:23 crc kubenswrapper[4933]: I1201 09:45:23.974970 4933 scope.go:117] "RemoveContainer" containerID="60698cd638626e49cbf485151caa4f0b750390053a553b70d268ce87be134821" Dec 01 09:45:23 crc kubenswrapper[4933]: E1201 09:45:23.975576 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60698cd638626e49cbf485151caa4f0b750390053a553b70d268ce87be134821\": container with ID starting with 60698cd638626e49cbf485151caa4f0b750390053a553b70d268ce87be134821 not found: ID does not exist" containerID="60698cd638626e49cbf485151caa4f0b750390053a553b70d268ce87be134821" Dec 01 09:45:23 crc kubenswrapper[4933]: I1201 09:45:23.975634 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60698cd638626e49cbf485151caa4f0b750390053a553b70d268ce87be134821"} err="failed to get container status \"60698cd638626e49cbf485151caa4f0b750390053a553b70d268ce87be134821\": rpc error: code = NotFound desc = could not find container \"60698cd638626e49cbf485151caa4f0b750390053a553b70d268ce87be134821\": container with ID starting with 60698cd638626e49cbf485151caa4f0b750390053a553b70d268ce87be134821 not found: ID does not exist" Dec 01 09:45:23 crc 
kubenswrapper[4933]: I1201 09:45:23.975656 4933 scope.go:117] "RemoveContainer" containerID="08207e8a884be3a6f7ae02e9504ad5832e83f564afb38a5e90f60f120a0195d1" Dec 01 09:45:23 crc kubenswrapper[4933]: E1201 09:45:23.975996 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08207e8a884be3a6f7ae02e9504ad5832e83f564afb38a5e90f60f120a0195d1\": container with ID starting with 08207e8a884be3a6f7ae02e9504ad5832e83f564afb38a5e90f60f120a0195d1 not found: ID does not exist" containerID="08207e8a884be3a6f7ae02e9504ad5832e83f564afb38a5e90f60f120a0195d1" Dec 01 09:45:23 crc kubenswrapper[4933]: I1201 09:45:23.976041 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08207e8a884be3a6f7ae02e9504ad5832e83f564afb38a5e90f60f120a0195d1"} err="failed to get container status \"08207e8a884be3a6f7ae02e9504ad5832e83f564afb38a5e90f60f120a0195d1\": rpc error: code = NotFound desc = could not find container \"08207e8a884be3a6f7ae02e9504ad5832e83f564afb38a5e90f60f120a0195d1\": container with ID starting with 08207e8a884be3a6f7ae02e9504ad5832e83f564afb38a5e90f60f120a0195d1 not found: ID does not exist" Dec 01 09:45:23 crc kubenswrapper[4933]: I1201 09:45:23.976072 4933 scope.go:117] "RemoveContainer" containerID="c4bd1c5f3271484fe59419a46a9494d4a9f75285a32a5765c5fe30b1536c431f" Dec 01 09:45:23 crc kubenswrapper[4933]: E1201 09:45:23.976474 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4bd1c5f3271484fe59419a46a9494d4a9f75285a32a5765c5fe30b1536c431f\": container with ID starting with c4bd1c5f3271484fe59419a46a9494d4a9f75285a32a5765c5fe30b1536c431f not found: ID does not exist" containerID="c4bd1c5f3271484fe59419a46a9494d4a9f75285a32a5765c5fe30b1536c431f" Dec 01 09:45:23 crc kubenswrapper[4933]: I1201 09:45:23.976500 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4bd1c5f3271484fe59419a46a9494d4a9f75285a32a5765c5fe30b1536c431f"} err="failed to get container status \"c4bd1c5f3271484fe59419a46a9494d4a9f75285a32a5765c5fe30b1536c431f\": rpc error: code = NotFound desc = could not find container \"c4bd1c5f3271484fe59419a46a9494d4a9f75285a32a5765c5fe30b1536c431f\": container with ID starting with c4bd1c5f3271484fe59419a46a9494d4a9f75285a32a5765c5fe30b1536c431f not found: ID does not exist" Dec 01 09:45:25 crc kubenswrapper[4933]: I1201 09:45:25.676585 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dad2eb-d8be-4c00-aa0f-1ec672e24458" path="/var/lib/kubelet/pods/f7dad2eb-d8be-4c00-aa0f-1ec672e24458/volumes" Dec 01 09:45:33 crc kubenswrapper[4933]: I1201 09:45:33.926221 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-64d64d5bf5-hc5zj" Dec 01 09:45:41 crc kubenswrapper[4933]: I1201 09:45:41.740900 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:45:41 crc kubenswrapper[4933]: I1201 09:45:41.741974 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:45:41 crc kubenswrapper[4933]: I1201 09:45:41.742074 4933 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" Dec 01 09:45:41 crc kubenswrapper[4933]: I1201 09:45:41.743512 4933 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9b9c9050f180243e388ba92ad81faccae53ee3940480103d59e4ab9a26921bbd"} pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 09:45:41 crc kubenswrapper[4933]: I1201 09:45:41.743980 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" containerID="cri-o://9b9c9050f180243e388ba92ad81faccae53ee3940480103d59e4ab9a26921bbd" gracePeriod=600 Dec 01 09:45:42 crc kubenswrapper[4933]: I1201 09:45:42.043643 4933 generic.go:334] "Generic (PLEG): container finished" podID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerID="9b9c9050f180243e388ba92ad81faccae53ee3940480103d59e4ab9a26921bbd" exitCode=0 Dec 01 09:45:42 crc kubenswrapper[4933]: I1201 09:45:42.043730 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerDied","Data":"9b9c9050f180243e388ba92ad81faccae53ee3940480103d59e4ab9a26921bbd"} Dec 01 09:45:42 crc kubenswrapper[4933]: I1201 09:45:42.044254 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerStarted","Data":"9380cff48ee91161c6b7a930159a88a7b204cb44f727f0c73879abbb5f388b3e"} Dec 01 09:45:42 crc kubenswrapper[4933]: I1201 09:45:42.044288 4933 scope.go:117] "RemoveContainer" containerID="f1d5bd612e31b9b4eae9b44f24aa08f8e8c6dbbfb00bd3bb8556671820bec1e5" Dec 01 09:45:53 crc kubenswrapper[4933]: I1201 09:45:53.843553 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7d44559b9d-mg6pw" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.720420 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-5hxqv"] Dec 01 09:45:54 crc kubenswrapper[4933]: E1201 09:45:54.720801 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7dad2eb-d8be-4c00-aa0f-1ec672e24458" containerName="extract-utilities" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.720826 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dad2eb-d8be-4c00-aa0f-1ec672e24458" containerName="extract-utilities" Dec 01 09:45:54 crc kubenswrapper[4933]: E1201 09:45:54.720855 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7dad2eb-d8be-4c00-aa0f-1ec672e24458" containerName="extract-content" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.720864 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dad2eb-d8be-4c00-aa0f-1ec672e24458" containerName="extract-content" Dec 01 09:45:54 crc kubenswrapper[4933]: E1201 09:45:54.720886 4933 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f7dad2eb-d8be-4c00-aa0f-1ec672e24458" containerName="registry-server" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.720893 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dad2eb-d8be-4c00-aa0f-1ec672e24458" containerName="registry-server" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.721049 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7dad2eb-d8be-4c00-aa0f-1ec672e24458" containerName="registry-server" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.723816 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.727996 4933 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.727978 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-kfn29"] Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.728106 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.728113 4933 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-dq7dx" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.729454 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kfn29" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.732289 4933 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.762615 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-kfn29"] Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.812899 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/554f5a34-1fff-4264-8748-c0a8e78e9490-metrics-certs\") pod \"frr-k8s-5hxqv\" (UID: \"554f5a34-1fff-4264-8748-c0a8e78e9490\") " pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.812993 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw4n7\" (UniqueName: \"kubernetes.io/projected/554f5a34-1fff-4264-8748-c0a8e78e9490-kube-api-access-cw4n7\") pod \"frr-k8s-5hxqv\" (UID: \"554f5a34-1fff-4264-8748-c0a8e78e9490\") " pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.813063 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/554f5a34-1fff-4264-8748-c0a8e78e9490-frr-conf\") pod \"frr-k8s-5hxqv\" (UID: \"554f5a34-1fff-4264-8748-c0a8e78e9490\") " pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.813093 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/554f5a34-1fff-4264-8748-c0a8e78e9490-reloader\") pod \"frr-k8s-5hxqv\" (UID: \"554f5a34-1fff-4264-8748-c0a8e78e9490\") " pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.813120 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/554f5a34-1fff-4264-8748-c0a8e78e9490-frr-sockets\") pod \"frr-k8s-5hxqv\" (UID: \"554f5a34-1fff-4264-8748-c0a8e78e9490\") " pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.813156 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/554f5a34-1fff-4264-8748-c0a8e78e9490-metrics\") pod \"frr-k8s-5hxqv\" (UID: \"554f5a34-1fff-4264-8748-c0a8e78e9490\") " pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.813180 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bbb71b8b-46bf-4013-93d7-f3a58f98b8f0-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-kfn29\" (UID: \"bbb71b8b-46bf-4013-93d7-f3a58f98b8f0\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kfn29" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.813242 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/554f5a34-1fff-4264-8748-c0a8e78e9490-frr-startup\") pod \"frr-k8s-5hxqv\" (UID: \"554f5a34-1fff-4264-8748-c0a8e78e9490\") " pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.813271 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dhbb\" (UniqueName: \"kubernetes.io/projected/bbb71b8b-46bf-4013-93d7-f3a58f98b8f0-kube-api-access-6dhbb\") pod \"frr-k8s-webhook-server-7fcb986d4-kfn29\" (UID: \"bbb71b8b-46bf-4013-93d7-f3a58f98b8f0\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kfn29" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.859688 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-crbpg"] Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.860885 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-crbpg" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.869057 4933 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.869115 4933 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-w9sng" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.869132 4933 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.869403 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.907862 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-rczrx"] Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.909225 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-rczrx" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.914955 4933 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.916495 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/554f5a34-1fff-4264-8748-c0a8e78e9490-metrics-certs\") pod \"frr-k8s-5hxqv\" (UID: \"554f5a34-1fff-4264-8748-c0a8e78e9490\") " pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.916558 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw4n7\" (UniqueName: \"kubernetes.io/projected/554f5a34-1fff-4264-8748-c0a8e78e9490-kube-api-access-cw4n7\") pod \"frr-k8s-5hxqv\" (UID: \"554f5a34-1fff-4264-8748-c0a8e78e9490\") " pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.917529 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/554f5a34-1fff-4264-8748-c0a8e78e9490-frr-conf\") pod \"frr-k8s-5hxqv\" (UID: \"554f5a34-1fff-4264-8748-c0a8e78e9490\") " pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.917585 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/554f5a34-1fff-4264-8748-c0a8e78e9490-reloader\") pod \"frr-k8s-5hxqv\" (UID: \"554f5a34-1fff-4264-8748-c0a8e78e9490\") " pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.917608 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/554f5a34-1fff-4264-8748-c0a8e78e9490-frr-sockets\") pod \"frr-k8s-5hxqv\" (UID: \"554f5a34-1fff-4264-8748-c0a8e78e9490\") " pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.917777 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/554f5a34-1fff-4264-8748-c0a8e78e9490-metrics\") pod \"frr-k8s-5hxqv\" (UID: \"554f5a34-1fff-4264-8748-c0a8e78e9490\") " pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.917806 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bbb71b8b-46bf-4013-93d7-f3a58f98b8f0-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-kfn29\" (UID: \"bbb71b8b-46bf-4013-93d7-f3a58f98b8f0\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kfn29" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.917860 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/554f5a34-1fff-4264-8748-c0a8e78e9490-frr-startup\") pod \"frr-k8s-5hxqv\" (UID: \"554f5a34-1fff-4264-8748-c0a8e78e9490\") " pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.917885 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dhbb\" (UniqueName: \"kubernetes.io/projected/bbb71b8b-46bf-4013-93d7-f3a58f98b8f0-kube-api-access-6dhbb\") pod \"frr-k8s-webhook-server-7fcb986d4-kfn29\" (UID: \"bbb71b8b-46bf-4013-93d7-f3a58f98b8f0\") " 
pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kfn29" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.917980 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/554f5a34-1fff-4264-8748-c0a8e78e9490-frr-conf\") pod \"frr-k8s-5hxqv\" (UID: \"554f5a34-1fff-4264-8748-c0a8e78e9490\") " pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.919954 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/554f5a34-1fff-4264-8748-c0a8e78e9490-frr-startup\") pod \"frr-k8s-5hxqv\" (UID: \"554f5a34-1fff-4264-8748-c0a8e78e9490\") " pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.920504 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/554f5a34-1fff-4264-8748-c0a8e78e9490-reloader\") pod \"frr-k8s-5hxqv\" (UID: \"554f5a34-1fff-4264-8748-c0a8e78e9490\") " pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.922388 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/554f5a34-1fff-4264-8748-c0a8e78e9490-metrics\") pod \"frr-k8s-5hxqv\" (UID: \"554f5a34-1fff-4264-8748-c0a8e78e9490\") " pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.922614 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/554f5a34-1fff-4264-8748-c0a8e78e9490-frr-sockets\") pod \"frr-k8s-5hxqv\" (UID: \"554f5a34-1fff-4264-8748-c0a8e78e9490\") " pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.928333 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/554f5a34-1fff-4264-8748-c0a8e78e9490-metrics-certs\") pod \"frr-k8s-5hxqv\" (UID: \"554f5a34-1fff-4264-8748-c0a8e78e9490\") " pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.930111 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bbb71b8b-46bf-4013-93d7-f3a58f98b8f0-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-kfn29\" (UID: \"bbb71b8b-46bf-4013-93d7-f3a58f98b8f0\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kfn29" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.932411 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-rczrx"] Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.942033 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw4n7\" (UniqueName: \"kubernetes.io/projected/554f5a34-1fff-4264-8748-c0a8e78e9490-kube-api-access-cw4n7\") pod \"frr-k8s-5hxqv\" (UID: \"554f5a34-1fff-4264-8748-c0a8e78e9490\") " pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:45:54 crc kubenswrapper[4933]: I1201 09:45:54.945627 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dhbb\" (UniqueName: \"kubernetes.io/projected/bbb71b8b-46bf-4013-93d7-f3a58f98b8f0-kube-api-access-6dhbb\") pod \"frr-k8s-webhook-server-7fcb986d4-kfn29\" (UID: \"bbb71b8b-46bf-4013-93d7-f3a58f98b8f0\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kfn29" Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.019270 4933 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7ab2e723-8322-4328-8afc-4b13397a538c-memberlist\") pod \"speaker-crbpg\" (UID: \"7ab2e723-8322-4328-8afc-4b13397a538c\") " pod="metallb-system/speaker-crbpg" Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.019652 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5wz9\" (UniqueName: \"kubernetes.io/projected/e9c82311-fa12-41b3-a4e2-50bca0b1c23f-kube-api-access-s5wz9\") pod \"controller-f8648f98b-rczrx\" (UID: \"e9c82311-fa12-41b3-a4e2-50bca0b1c23f\") " pod="metallb-system/controller-f8648f98b-rczrx" Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.019809 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7ab2e723-8322-4328-8afc-4b13397a538c-metallb-excludel2\") pod \"speaker-crbpg\" (UID: \"7ab2e723-8322-4328-8afc-4b13397a538c\") " pod="metallb-system/speaker-crbpg" Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.019948 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ab2e723-8322-4328-8afc-4b13397a538c-metrics-certs\") pod \"speaker-crbpg\" (UID: \"7ab2e723-8322-4328-8afc-4b13397a538c\") " pod="metallb-system/speaker-crbpg" Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.020051 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsclc\" (UniqueName: \"kubernetes.io/projected/7ab2e723-8322-4328-8afc-4b13397a538c-kube-api-access-lsclc\") pod \"speaker-crbpg\" (UID: \"7ab2e723-8322-4328-8afc-4b13397a538c\") " pod="metallb-system/speaker-crbpg" Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.020137 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9c82311-fa12-41b3-a4e2-50bca0b1c23f-metrics-certs\") pod \"controller-f8648f98b-rczrx\" (UID: \"e9c82311-fa12-41b3-a4e2-50bca0b1c23f\") " pod="metallb-system/controller-f8648f98b-rczrx" Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.020223 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9c82311-fa12-41b3-a4e2-50bca0b1c23f-cert\") pod \"controller-f8648f98b-rczrx\" (UID: \"e9c82311-fa12-41b3-a4e2-50bca0b1c23f\") " pod="metallb-system/controller-f8648f98b-rczrx" Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.063407 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.084240 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kfn29" Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.121680 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5wz9\" (UniqueName: \"kubernetes.io/projected/e9c82311-fa12-41b3-a4e2-50bca0b1c23f-kube-api-access-s5wz9\") pod \"controller-f8648f98b-rczrx\" (UID: \"e9c82311-fa12-41b3-a4e2-50bca0b1c23f\") " pod="metallb-system/controller-f8648f98b-rczrx" Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.121773 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7ab2e723-8322-4328-8afc-4b13397a538c-metallb-excludel2\") pod \"speaker-crbpg\" (UID: \"7ab2e723-8322-4328-8afc-4b13397a538c\") " pod="metallb-system/speaker-crbpg" Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.121827 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ab2e723-8322-4328-8afc-4b13397a538c-metrics-certs\") pod \"speaker-crbpg\" (UID: \"7ab2e723-8322-4328-8afc-4b13397a538c\") " pod="metallb-system/speaker-crbpg" Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.121881 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsclc\" (UniqueName: \"kubernetes.io/projected/7ab2e723-8322-4328-8afc-4b13397a538c-kube-api-access-lsclc\") pod \"speaker-crbpg\" (UID: \"7ab2e723-8322-4328-8afc-4b13397a538c\") " pod="metallb-system/speaker-crbpg" Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.121918 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9c82311-fa12-41b3-a4e2-50bca0b1c23f-metrics-certs\") pod \"controller-f8648f98b-rczrx\" (UID: \"e9c82311-fa12-41b3-a4e2-50bca0b1c23f\") " pod="metallb-system/controller-f8648f98b-rczrx" Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.121952 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9c82311-fa12-41b3-a4e2-50bca0b1c23f-cert\") pod \"controller-f8648f98b-rczrx\" (UID: \"e9c82311-fa12-41b3-a4e2-50bca0b1c23f\") " pod="metallb-system/controller-f8648f98b-rczrx" Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.122020 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7ab2e723-8322-4328-8afc-4b13397a538c-memberlist\") pod \"speaker-crbpg\" (UID: \"7ab2e723-8322-4328-8afc-4b13397a538c\") " pod="metallb-system/speaker-crbpg" Dec 01 09:45:55 crc kubenswrapper[4933]: E1201 09:45:55.122151 4933 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 01 09:45:55 crc kubenswrapper[4933]: E1201 09:45:55.122220 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ab2e723-8322-4328-8afc-4b13397a538c-memberlist podName:7ab2e723-8322-4328-8afc-4b13397a538c nodeName:}" failed. No retries permitted until 2025-12-01 09:45:55.62219408 +0000 UTC m=+846.263917705 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7ab2e723-8322-4328-8afc-4b13397a538c-memberlist") pod "speaker-crbpg" (UID: "7ab2e723-8322-4328-8afc-4b13397a538c") : secret "metallb-memberlist" not found Dec 01 09:45:55 crc kubenswrapper[4933]: E1201 09:45:55.122520 4933 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Dec 01 09:45:55 crc kubenswrapper[4933]: E1201 09:45:55.122568 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9c82311-fa12-41b3-a4e2-50bca0b1c23f-metrics-certs podName:e9c82311-fa12-41b3-a4e2-50bca0b1c23f nodeName:}" failed. No retries permitted until 2025-12-01 09:45:55.62255383 +0000 UTC m=+846.264277435 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e9c82311-fa12-41b3-a4e2-50bca0b1c23f-metrics-certs") pod "controller-f8648f98b-rczrx" (UID: "e9c82311-fa12-41b3-a4e2-50bca0b1c23f") : secret "controller-certs-secret" not found Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.123091 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7ab2e723-8322-4328-8afc-4b13397a538c-metallb-excludel2\") pod \"speaker-crbpg\" (UID: \"7ab2e723-8322-4328-8afc-4b13397a538c\") " pod="metallb-system/speaker-crbpg" Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.125301 4933 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.130764 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ab2e723-8322-4328-8afc-4b13397a538c-metrics-certs\") pod \"speaker-crbpg\" (UID: \"7ab2e723-8322-4328-8afc-4b13397a538c\") " pod="metallb-system/speaker-crbpg" Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.140115 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9c82311-fa12-41b3-a4e2-50bca0b1c23f-cert\") pod \"controller-f8648f98b-rczrx\" (UID: \"e9c82311-fa12-41b3-a4e2-50bca0b1c23f\") " pod="metallb-system/controller-f8648f98b-rczrx" Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.148895 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsclc\" (UniqueName: \"kubernetes.io/projected/7ab2e723-8322-4328-8afc-4b13397a538c-kube-api-access-lsclc\") pod \"speaker-crbpg\" (UID: \"7ab2e723-8322-4328-8afc-4b13397a538c\") " pod="metallb-system/speaker-crbpg" Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.149230 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5wz9\" (UniqueName: \"kubernetes.io/projected/e9c82311-fa12-41b3-a4e2-50bca0b1c23f-kube-api-access-s5wz9\") pod \"controller-f8648f98b-rczrx\" (UID: \"e9c82311-fa12-41b3-a4e2-50bca0b1c23f\") " pod="metallb-system/controller-f8648f98b-rczrx" Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.540962 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-kfn29"] Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.629926 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7ab2e723-8322-4328-8afc-4b13397a538c-memberlist\") pod \"speaker-crbpg\" (UID: 
\"7ab2e723-8322-4328-8afc-4b13397a538c\") " pod="metallb-system/speaker-crbpg" Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.630058 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9c82311-fa12-41b3-a4e2-50bca0b1c23f-metrics-certs\") pod \"controller-f8648f98b-rczrx\" (UID: \"e9c82311-fa12-41b3-a4e2-50bca0b1c23f\") " pod="metallb-system/controller-f8648f98b-rczrx" Dec 01 09:45:55 crc kubenswrapper[4933]: E1201 09:45:55.630141 4933 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 01 09:45:55 crc kubenswrapper[4933]: E1201 09:45:55.630250 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ab2e723-8322-4328-8afc-4b13397a538c-memberlist podName:7ab2e723-8322-4328-8afc-4b13397a538c nodeName:}" failed. No retries permitted until 2025-12-01 09:45:56.630220619 +0000 UTC m=+847.271944234 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7ab2e723-8322-4328-8afc-4b13397a538c-memberlist") pod "speaker-crbpg" (UID: "7ab2e723-8322-4328-8afc-4b13397a538c") : secret "metallb-memberlist" not found Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.635205 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9c82311-fa12-41b3-a4e2-50bca0b1c23f-metrics-certs\") pod \"controller-f8648f98b-rczrx\" (UID: \"e9c82311-fa12-41b3-a4e2-50bca0b1c23f\") " pod="metallb-system/controller-f8648f98b-rczrx" Dec 01 09:45:55 crc kubenswrapper[4933]: I1201 09:45:55.892309 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-rczrx" Dec 01 09:45:56 crc kubenswrapper[4933]: I1201 09:45:56.145718 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5hxqv" event={"ID":"554f5a34-1fff-4264-8748-c0a8e78e9490","Type":"ContainerStarted","Data":"bd537c8008418154dc23288f05d45930afebd7f895d18fc47f694b4181376ecc"} Dec 01 09:45:56 crc kubenswrapper[4933]: I1201 09:45:56.148024 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kfn29" event={"ID":"bbb71b8b-46bf-4013-93d7-f3a58f98b8f0","Type":"ContainerStarted","Data":"af1abd9e1729d041f8ef94730797c80ade9c401c97d6307a1028d0b19edf9edd"} Dec 01 09:45:56 crc kubenswrapper[4933]: I1201 09:45:56.177758 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-rczrx"] Dec 01 09:45:56 crc kubenswrapper[4933]: W1201 09:45:56.183393 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9c82311_fa12_41b3_a4e2_50bca0b1c23f.slice/crio-6e888cc8f70cb7b898881b19ec10e295cc7360c02f0081de75ed34c1f8f36500 WatchSource:0}: Error finding container 6e888cc8f70cb7b898881b19ec10e295cc7360c02f0081de75ed34c1f8f36500: Status 404 returned error can't find the container with id 6e888cc8f70cb7b898881b19ec10e295cc7360c02f0081de75ed34c1f8f36500 Dec 01 09:45:56 crc kubenswrapper[4933]: I1201 09:45:56.647759 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7ab2e723-8322-4328-8afc-4b13397a538c-memberlist\") pod \"speaker-crbpg\" (UID: \"7ab2e723-8322-4328-8afc-4b13397a538c\") " pod="metallb-system/speaker-crbpg" Dec 01 09:45:56 crc 
kubenswrapper[4933]: I1201 09:45:56.655823 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7ab2e723-8322-4328-8afc-4b13397a538c-memberlist\") pod \"speaker-crbpg\" (UID: \"7ab2e723-8322-4328-8afc-4b13397a538c\") " pod="metallb-system/speaker-crbpg" Dec 01 09:45:56 crc kubenswrapper[4933]: I1201 09:45:56.686569 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-crbpg" Dec 01 09:45:56 crc kubenswrapper[4933]: W1201 09:45:56.719348 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ab2e723_8322_4328_8afc_4b13397a538c.slice/crio-a69473dbc6c351aff84935eda058ec6ffaa42d94080c8e46525bb6ca23e3b913 WatchSource:0}: Error finding container a69473dbc6c351aff84935eda058ec6ffaa42d94080c8e46525bb6ca23e3b913: Status 404 returned error can't find the container with id a69473dbc6c351aff84935eda058ec6ffaa42d94080c8e46525bb6ca23e3b913 Dec 01 09:45:57 crc kubenswrapper[4933]: I1201 09:45:57.165287 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-crbpg" event={"ID":"7ab2e723-8322-4328-8afc-4b13397a538c","Type":"ContainerStarted","Data":"e5169ca70be65eb43c2e30056e7e4dc5d1ef92987d1a78cd294e9989378a5c95"} Dec 01 09:45:57 crc kubenswrapper[4933]: I1201 09:45:57.165495 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-crbpg" event={"ID":"7ab2e723-8322-4328-8afc-4b13397a538c","Type":"ContainerStarted","Data":"a69473dbc6c351aff84935eda058ec6ffaa42d94080c8e46525bb6ca23e3b913"} Dec 01 09:45:57 crc kubenswrapper[4933]: I1201 09:45:57.169102 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-rczrx" event={"ID":"e9c82311-fa12-41b3-a4e2-50bca0b1c23f","Type":"ContainerStarted","Data":"a3ea2c2e053e0cb06aff2faea184e3b4febcea89c48f39011a272a3c2468e5b5"} Dec 01 09:45:57 crc kubenswrapper[4933]: I1201 09:45:57.169142 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-rczrx" event={"ID":"e9c82311-fa12-41b3-a4e2-50bca0b1c23f","Type":"ContainerStarted","Data":"f97e7d7f90475d7acf37108b355a1cba0e0c8421ab0c74d061a7d4ff8c143af5"} Dec 01 09:45:57 crc kubenswrapper[4933]: I1201 09:45:57.169157 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-rczrx" event={"ID":"e9c82311-fa12-41b3-a4e2-50bca0b1c23f","Type":"ContainerStarted","Data":"6e888cc8f70cb7b898881b19ec10e295cc7360c02f0081de75ed34c1f8f36500"} Dec 01 09:45:57 crc kubenswrapper[4933]: I1201 09:45:57.169400 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-rczrx" Dec 01 09:45:57 crc kubenswrapper[4933]: I1201 09:45:57.193163 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-rczrx" podStartSLOduration=3.193135655 podStartE2EDuration="3.193135655s" podCreationTimestamp="2025-12-01 09:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:45:57.192639363 +0000 UTC m=+847.834362978" watchObservedRunningTime="2025-12-01 09:45:57.193135655 +0000 UTC m=+847.834859270" Dec 01 09:45:58 crc kubenswrapper[4933]: I1201 09:45:58.205240 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-crbpg" 
event={"ID":"7ab2e723-8322-4328-8afc-4b13397a538c","Type":"ContainerStarted","Data":"460cca6c53df5646e541e43be82c8239e60df9772ef3160e89e3a56a7ed70e43"} Dec 01 09:45:58 crc kubenswrapper[4933]: I1201 09:45:58.207811 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-crbpg" Dec 01 09:45:58 crc kubenswrapper[4933]: I1201 09:45:58.244806 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-crbpg" podStartSLOduration=4.244768463 podStartE2EDuration="4.244768463s" podCreationTimestamp="2025-12-01 09:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:45:58.231405392 +0000 UTC m=+848.873129017" watchObservedRunningTime="2025-12-01 09:45:58.244768463 +0000 UTC m=+848.886492078" Dec 01 09:46:05 crc kubenswrapper[4933]: I1201 09:46:05.387079 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kfn29" event={"ID":"bbb71b8b-46bf-4013-93d7-f3a58f98b8f0","Type":"ContainerStarted","Data":"86c0cf4c91f06537bbeb55ac86cf43b5ad123209989f99534cff4fdc312fea02"} Dec 01 09:46:05 crc kubenswrapper[4933]: I1201 09:46:05.388042 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kfn29" Dec 01 09:46:05 crc kubenswrapper[4933]: I1201 09:46:05.389725 4933 generic.go:334] "Generic (PLEG): container finished" podID="554f5a34-1fff-4264-8748-c0a8e78e9490" containerID="05dae7745a6e32ce79cdbcd8b6faf700945120729e9f736323530c93663209e5" exitCode=0 Dec 01 09:46:05 crc kubenswrapper[4933]: I1201 09:46:05.389786 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5hxqv" event={"ID":"554f5a34-1fff-4264-8748-c0a8e78e9490","Type":"ContainerDied","Data":"05dae7745a6e32ce79cdbcd8b6faf700945120729e9f736323530c93663209e5"} Dec 01 09:46:05 crc kubenswrapper[4933]: I1201 09:46:05.440771 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kfn29" podStartSLOduration=2.630569889 podStartE2EDuration="11.440734873s" podCreationTimestamp="2025-12-01 09:45:54 +0000 UTC" firstStartedPulling="2025-12-01 09:45:55.54968787 +0000 UTC m=+846.191411485" lastFinishedPulling="2025-12-01 09:46:04.359852854 +0000 UTC m=+855.001576469" observedRunningTime="2025-12-01 09:46:05.405351339 +0000 UTC m=+856.047074954" watchObservedRunningTime="2025-12-01 09:46:05.440734873 +0000 UTC m=+856.082458478" Dec 01 09:46:06 crc kubenswrapper[4933]: I1201 09:46:06.397964 4933 generic.go:334] "Generic (PLEG): container finished" podID="554f5a34-1fff-4264-8748-c0a8e78e9490" containerID="95d20f8481e49f34e786ca36750cacd1c2b169a9b8564a44967a82bf968345e8" exitCode=0 Dec 01 09:46:06 crc kubenswrapper[4933]: I1201 09:46:06.398031 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5hxqv" event={"ID":"554f5a34-1fff-4264-8748-c0a8e78e9490","Type":"ContainerDied","Data":"95d20f8481e49f34e786ca36750cacd1c2b169a9b8564a44967a82bf968345e8"} Dec 01 09:46:07 crc kubenswrapper[4933]: I1201 09:46:07.407705 4933 generic.go:334] "Generic (PLEG): container finished" podID="554f5a34-1fff-4264-8748-c0a8e78e9490" containerID="5b2c54519fc026f81bd9a2671e7a8d7abba975c58f5430155caf1e285b385593" exitCode=0 Dec 01 09:46:07 crc kubenswrapper[4933]: I1201 09:46:07.407772 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-5hxqv" event={"ID":"554f5a34-1fff-4264-8748-c0a8e78e9490","Type":"ContainerDied","Data":"5b2c54519fc026f81bd9a2671e7a8d7abba975c58f5430155caf1e285b385593"} Dec 01 09:46:08 crc kubenswrapper[4933]: I1201 09:46:08.419567 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5hxqv" event={"ID":"554f5a34-1fff-4264-8748-c0a8e78e9490","Type":"ContainerStarted","Data":"c2b8d53f3c285a137ffd9c7e5724ae995ebcac2175cb84b7ffe4a094c5d60881"} Dec 01 09:46:08 crc kubenswrapper[4933]: I1201 09:46:08.420143 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5hxqv" event={"ID":"554f5a34-1fff-4264-8748-c0a8e78e9490","Type":"ContainerStarted","Data":"b908efd667f7d9213e4d3dcb01edec3d0d3f9988fcbed8952c9471a71411cc09"} Dec 01 09:46:08 crc kubenswrapper[4933]: I1201 09:46:08.420157 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5hxqv" event={"ID":"554f5a34-1fff-4264-8748-c0a8e78e9490","Type":"ContainerStarted","Data":"863da7cbf3e6e38b2fb796696f51cd6b6c0f776dd26eaf1d1d15d44ec282c984"} Dec 01 09:46:09 crc kubenswrapper[4933]: I1201 09:46:09.432066 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5hxqv" event={"ID":"554f5a34-1fff-4264-8748-c0a8e78e9490","Type":"ContainerStarted","Data":"654ec3bef8f9d88fc61ca903052ac602b2b97590d4d23d921955e0f0a1df725d"} Dec 01 09:46:09 crc kubenswrapper[4933]: I1201 09:46:09.434444 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5hxqv" event={"ID":"554f5a34-1fff-4264-8748-c0a8e78e9490","Type":"ContainerStarted","Data":"e07a1529b3239f425c6cea6eca3d91daa41970fd05eb5ae9b1bfbecf161d7de6"} Dec 01 09:46:10 crc kubenswrapper[4933]: I1201 09:46:10.448088 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5hxqv" event={"ID":"554f5a34-1fff-4264-8748-c0a8e78e9490","Type":"ContainerStarted","Data":"4f61585205c3dc7cb80e782efc26f03b16426bed5f8b2c600d41a9618702f9dc"} Dec 01 09:46:10 crc kubenswrapper[4933]: I1201 09:46:10.476267 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-5hxqv" podStartSLOduration=7.365418587 podStartE2EDuration="16.476234037s" podCreationTimestamp="2025-12-01 09:45:54 +0000 UTC" firstStartedPulling="2025-12-01 09:45:55.231944752 +0000 UTC m=+845.873668367" lastFinishedPulling="2025-12-01 09:46:04.342760202 +0000 UTC m=+854.984483817" observedRunningTime="2025-12-01 09:46:10.475606372 +0000 UTC m=+861.117330007" watchObservedRunningTime="2025-12-01 09:46:10.476234037 +0000 UTC m=+861.117957652" Dec 01 09:46:11 crc kubenswrapper[4933]: I1201 09:46:11.454923 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:46:15 crc kubenswrapper[4933]: I1201 09:46:15.064097 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:46:15 crc kubenswrapper[4933]: I1201 09:46:15.093431 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kfn29" Dec 01 09:46:15 crc kubenswrapper[4933]: I1201 09:46:15.106042 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:46:15 crc kubenswrapper[4933]: I1201 09:46:15.899064 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-rczrx" Dec 01 09:46:16 crc 
kubenswrapper[4933]: I1201 09:46:16.691903 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-crbpg" Dec 01 09:46:19 crc kubenswrapper[4933]: I1201 09:46:19.594007 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-nkh74"] Dec 01 09:46:19 crc kubenswrapper[4933]: I1201 09:46:19.595779 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nkh74" Dec 01 09:46:19 crc kubenswrapper[4933]: I1201 09:46:19.598145 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 01 09:46:19 crc kubenswrapper[4933]: I1201 09:46:19.598527 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 01 09:46:19 crc kubenswrapper[4933]: I1201 09:46:19.599919 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-8cskv" Dec 01 09:46:19 crc kubenswrapper[4933]: I1201 09:46:19.618444 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nkh74"] Dec 01 09:46:19 crc kubenswrapper[4933]: I1201 09:46:19.691021 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sg6g\" (UniqueName: \"kubernetes.io/projected/28a1145e-4e3c-4c13-9db6-8dd35945957c-kube-api-access-7sg6g\") pod \"openstack-operator-index-nkh74\" (UID: \"28a1145e-4e3c-4c13-9db6-8dd35945957c\") " pod="openstack-operators/openstack-operator-index-nkh74" Dec 01 09:46:19 crc kubenswrapper[4933]: I1201 09:46:19.792720 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sg6g\" (UniqueName: \"kubernetes.io/projected/28a1145e-4e3c-4c13-9db6-8dd35945957c-kube-api-access-7sg6g\") pod \"openstack-operator-index-nkh74\" (UID: \"28a1145e-4e3c-4c13-9db6-8dd35945957c\") " pod="openstack-operators/openstack-operator-index-nkh74" Dec 01 09:46:19 crc kubenswrapper[4933]: I1201 09:46:19.815995 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sg6g\" (UniqueName: \"kubernetes.io/projected/28a1145e-4e3c-4c13-9db6-8dd35945957c-kube-api-access-7sg6g\") pod \"openstack-operator-index-nkh74\" (UID: \"28a1145e-4e3c-4c13-9db6-8dd35945957c\") " pod="openstack-operators/openstack-operator-index-nkh74" Dec 01 09:46:19 crc kubenswrapper[4933]: I1201 09:46:19.921443 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nkh74" Dec 01 09:46:20 crc kubenswrapper[4933]: I1201 09:46:20.156040 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nkh74"] Dec 01 09:46:20 crc kubenswrapper[4933]: W1201 09:46:20.159372 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28a1145e_4e3c_4c13_9db6_8dd35945957c.slice/crio-f9f7d1654165190a50f049de04c8bb6c9473cf7b0a6b287a3418a907e310dff5 WatchSource:0}: Error finding container f9f7d1654165190a50f049de04c8bb6c9473cf7b0a6b287a3418a907e310dff5: Status 404 returned error can't find the container with id f9f7d1654165190a50f049de04c8bb6c9473cf7b0a6b287a3418a907e310dff5 Dec 01 09:46:20 crc kubenswrapper[4933]: I1201 09:46:20.517712 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nkh74" event={"ID":"28a1145e-4e3c-4c13-9db6-8dd35945957c","Type":"ContainerStarted","Data":"f9f7d1654165190a50f049de04c8bb6c9473cf7b0a6b287a3418a907e310dff5"} Dec 01 09:46:22 crc kubenswrapper[4933]: I1201 09:46:22.967947 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-nkh74"] Dec 01 09:46:23 crc kubenswrapper[4933]: I1201 09:46:23.535849 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nkh74" event={"ID":"28a1145e-4e3c-4c13-9db6-8dd35945957c","Type":"ContainerStarted","Data":"671ab27a87c8dd8467d63815c12e217f9f30f833d6970e239665f70cb6deaa39"} Dec 01 09:46:23 crc kubenswrapper[4933]: I1201 09:46:23.568300 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-nkh74" podStartSLOduration=2.222448874 podStartE2EDuration="4.568266149s" podCreationTimestamp="2025-12-01 09:46:19 +0000 UTC" firstStartedPulling="2025-12-01 09:46:20.164147482 +0000 UTC m=+870.805871087" lastFinishedPulling="2025-12-01 09:46:22.509964737 +0000 UTC m=+873.151688362" observedRunningTime="2025-12-01 09:46:23.560874917 +0000 UTC m=+874.202598542" watchObservedRunningTime="2025-12-01 09:46:23.568266149 +0000 UTC m=+874.209989765" Dec 01 09:46:23 crc kubenswrapper[4933]: I1201 09:46:23.612300 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vnqpf"] Dec 01 09:46:23 crc kubenswrapper[4933]: I1201 09:46:23.613091 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vnqpf" Dec 01 09:46:23 crc kubenswrapper[4933]: I1201 09:46:23.653245 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdnt2\" (UniqueName: \"kubernetes.io/projected/61a68407-8b55-4951-aa9b-8f2348e5b3b1-kube-api-access-wdnt2\") pod \"openstack-operator-index-vnqpf\" (UID: \"61a68407-8b55-4951-aa9b-8f2348e5b3b1\") " pod="openstack-operators/openstack-operator-index-vnqpf" Dec 01 09:46:23 crc kubenswrapper[4933]: I1201 09:46:23.697089 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vnqpf"] Dec 01 09:46:23 crc kubenswrapper[4933]: I1201 09:46:23.754781 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdnt2\" (UniqueName: \"kubernetes.io/projected/61a68407-8b55-4951-aa9b-8f2348e5b3b1-kube-api-access-wdnt2\") pod \"openstack-operator-index-vnqpf\" (UID: \"61a68407-8b55-4951-aa9b-8f2348e5b3b1\") " pod="openstack-operators/openstack-operator-index-vnqpf" Dec 01 09:46:23 crc kubenswrapper[4933]: I1201 09:46:23.783933 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdnt2\" (UniqueName: \"kubernetes.io/projected/61a68407-8b55-4951-aa9b-8f2348e5b3b1-kube-api-access-wdnt2\") pod \"openstack-operator-index-vnqpf\" (UID: \"61a68407-8b55-4951-aa9b-8f2348e5b3b1\") " pod="openstack-operators/openstack-operator-index-vnqpf" Dec 01 09:46:23 crc kubenswrapper[4933]: I1201 09:46:23.994192 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vnqpf" Dec 01 09:46:24 crc kubenswrapper[4933]: I1201 09:46:24.489431 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vnqpf"] Dec 01 09:46:24 crc kubenswrapper[4933]: W1201 09:46:24.494670 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61a68407_8b55_4951_aa9b_8f2348e5b3b1.slice/crio-ea6240f2bc0129e7b74cd096c09405ce2705051b6b39c87f1803eba44ceaddb1 WatchSource:0}: Error finding container ea6240f2bc0129e7b74cd096c09405ce2705051b6b39c87f1803eba44ceaddb1: Status 404 returned error can't find the container with id ea6240f2bc0129e7b74cd096c09405ce2705051b6b39c87f1803eba44ceaddb1 Dec 01 09:46:24 crc kubenswrapper[4933]: I1201 09:46:24.545889 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vnqpf" event={"ID":"61a68407-8b55-4951-aa9b-8f2348e5b3b1","Type":"ContainerStarted","Data":"ea6240f2bc0129e7b74cd096c09405ce2705051b6b39c87f1803eba44ceaddb1"} Dec 01 09:46:24 crc kubenswrapper[4933]: I1201 09:46:24.546036 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-nkh74" podUID="28a1145e-4e3c-4c13-9db6-8dd35945957c" containerName="registry-server" containerID="cri-o://671ab27a87c8dd8467d63815c12e217f9f30f833d6970e239665f70cb6deaa39" gracePeriod=2 Dec 01 09:46:24 crc kubenswrapper[4933]: I1201 09:46:24.931049 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nkh74" Dec 01 09:46:24 crc kubenswrapper[4933]: I1201 09:46:24.973886 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sg6g\" (UniqueName: \"kubernetes.io/projected/28a1145e-4e3c-4c13-9db6-8dd35945957c-kube-api-access-7sg6g\") pod \"28a1145e-4e3c-4c13-9db6-8dd35945957c\" (UID: \"28a1145e-4e3c-4c13-9db6-8dd35945957c\") " Dec 01 09:46:24 crc kubenswrapper[4933]: I1201 09:46:24.979578 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28a1145e-4e3c-4c13-9db6-8dd35945957c-kube-api-access-7sg6g" (OuterVolumeSpecName: "kube-api-access-7sg6g") pod "28a1145e-4e3c-4c13-9db6-8dd35945957c" (UID: "28a1145e-4e3c-4c13-9db6-8dd35945957c"). InnerVolumeSpecName "kube-api-access-7sg6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:46:25 crc kubenswrapper[4933]: I1201 09:46:25.068842 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-5hxqv" Dec 01 09:46:25 crc kubenswrapper[4933]: I1201 09:46:25.076652 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sg6g\" (UniqueName: \"kubernetes.io/projected/28a1145e-4e3c-4c13-9db6-8dd35945957c-kube-api-access-7sg6g\") on node \"crc\" DevicePath \"\"" Dec 01 09:46:25 crc kubenswrapper[4933]: I1201 09:46:25.556946 4933 generic.go:334] "Generic (PLEG): container finished" podID="28a1145e-4e3c-4c13-9db6-8dd35945957c" containerID="671ab27a87c8dd8467d63815c12e217f9f30f833d6970e239665f70cb6deaa39" exitCode=0 Dec 01 09:46:25 crc kubenswrapper[4933]: I1201 09:46:25.557059 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nkh74" Dec 01 09:46:25 crc kubenswrapper[4933]: I1201 09:46:25.557150 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nkh74" event={"ID":"28a1145e-4e3c-4c13-9db6-8dd35945957c","Type":"ContainerDied","Data":"671ab27a87c8dd8467d63815c12e217f9f30f833d6970e239665f70cb6deaa39"} Dec 01 09:46:25 crc kubenswrapper[4933]: I1201 09:46:25.557348 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nkh74" event={"ID":"28a1145e-4e3c-4c13-9db6-8dd35945957c","Type":"ContainerDied","Data":"f9f7d1654165190a50f049de04c8bb6c9473cf7b0a6b287a3418a907e310dff5"} Dec 01 09:46:25 crc kubenswrapper[4933]: I1201 09:46:25.557386 4933 scope.go:117] "RemoveContainer" containerID="671ab27a87c8dd8467d63815c12e217f9f30f833d6970e239665f70cb6deaa39" Dec 01 09:46:25 crc kubenswrapper[4933]: I1201 09:46:25.560818 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vnqpf" event={"ID":"61a68407-8b55-4951-aa9b-8f2348e5b3b1","Type":"ContainerStarted","Data":"8b20534b27b7853437f3afd6dfc0aeb43359c92d871588410119d2ad514a60f6"} Dec 01 09:46:25 crc kubenswrapper[4933]: I1201 09:46:25.586385 4933 scope.go:117] "RemoveContainer" containerID="671ab27a87c8dd8467d63815c12e217f9f30f833d6970e239665f70cb6deaa39" Dec 01 09:46:25 crc kubenswrapper[4933]: E1201 09:46:25.587265 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"671ab27a87c8dd8467d63815c12e217f9f30f833d6970e239665f70cb6deaa39\": container with ID starting with 671ab27a87c8dd8467d63815c12e217f9f30f833d6970e239665f70cb6deaa39 not found: ID does not exist" 
containerID="671ab27a87c8dd8467d63815c12e217f9f30f833d6970e239665f70cb6deaa39" Dec 01 09:46:25 crc kubenswrapper[4933]: I1201 09:46:25.587356 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"671ab27a87c8dd8467d63815c12e217f9f30f833d6970e239665f70cb6deaa39"} err="failed to get container status \"671ab27a87c8dd8467d63815c12e217f9f30f833d6970e239665f70cb6deaa39\": rpc error: code = NotFound desc = could not find container \"671ab27a87c8dd8467d63815c12e217f9f30f833d6970e239665f70cb6deaa39\": container with ID starting with 671ab27a87c8dd8467d63815c12e217f9f30f833d6970e239665f70cb6deaa39 not found: ID does not exist" Dec 01 09:46:25 crc kubenswrapper[4933]: I1201 09:46:25.592353 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vnqpf" podStartSLOduration=2.512411944 podStartE2EDuration="2.592293117s" podCreationTimestamp="2025-12-01 09:46:23 +0000 UTC" firstStartedPulling="2025-12-01 09:46:24.500653552 +0000 UTC m=+875.142377167" lastFinishedPulling="2025-12-01 09:46:24.580534725 +0000 UTC m=+875.222258340" observedRunningTime="2025-12-01 09:46:25.584787641 +0000 UTC m=+876.226511256" watchObservedRunningTime="2025-12-01 09:46:25.592293117 +0000 UTC m=+876.234016772" Dec 01 09:46:25 crc kubenswrapper[4933]: I1201 09:46:25.604847 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-nkh74"] Dec 01 09:46:25 crc kubenswrapper[4933]: I1201 09:46:25.609808 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-nkh74"] Dec 01 09:46:25 crc kubenswrapper[4933]: I1201 09:46:25.675564 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28a1145e-4e3c-4c13-9db6-8dd35945957c" path="/var/lib/kubelet/pods/28a1145e-4e3c-4c13-9db6-8dd35945957c/volumes" Dec 01 09:46:33 crc kubenswrapper[4933]: I1201 09:46:33.995057 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-vnqpf" Dec 01 09:46:33 crc kubenswrapper[4933]: I1201 09:46:33.995900 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-vnqpf" Dec 01 09:46:34 crc kubenswrapper[4933]: I1201 09:46:34.040181 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-vnqpf" Dec 01 09:46:34 crc kubenswrapper[4933]: I1201 09:46:34.664729 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-vnqpf" Dec 01 09:46:37 crc kubenswrapper[4933]: I1201 09:46:37.419460 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d4khf"] Dec 01 09:46:37 crc kubenswrapper[4933]: E1201 09:46:37.420611 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28a1145e-4e3c-4c13-9db6-8dd35945957c" containerName="registry-server" Dec 01 09:46:37 crc kubenswrapper[4933]: I1201 09:46:37.420636 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a1145e-4e3c-4c13-9db6-8dd35945957c" containerName="registry-server" Dec 01 09:46:37 crc kubenswrapper[4933]: I1201 09:46:37.420864 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="28a1145e-4e3c-4c13-9db6-8dd35945957c" containerName="registry-server" Dec 01 09:46:37 crc kubenswrapper[4933]: I1201 09:46:37.422879 4933 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d4khf" Dec 01 09:46:37 crc kubenswrapper[4933]: I1201 09:46:37.429100 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d4khf"] Dec 01 09:46:37 crc kubenswrapper[4933]: I1201 09:46:37.473589 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcc6213c-dbcc-474b-af78-1218910b00d0-catalog-content\") pod \"redhat-marketplace-d4khf\" (UID: \"dcc6213c-dbcc-474b-af78-1218910b00d0\") " pod="openshift-marketplace/redhat-marketplace-d4khf" Dec 01 09:46:37 crc kubenswrapper[4933]: I1201 09:46:37.473666 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz2xb\" (UniqueName: \"kubernetes.io/projected/dcc6213c-dbcc-474b-af78-1218910b00d0-kube-api-access-zz2xb\") pod \"redhat-marketplace-d4khf\" (UID: \"dcc6213c-dbcc-474b-af78-1218910b00d0\") " pod="openshift-marketplace/redhat-marketplace-d4khf" Dec 01 09:46:37 crc kubenswrapper[4933]: I1201 09:46:37.473747 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcc6213c-dbcc-474b-af78-1218910b00d0-utilities\") pod \"redhat-marketplace-d4khf\" (UID: \"dcc6213c-dbcc-474b-af78-1218910b00d0\") " pod="openshift-marketplace/redhat-marketplace-d4khf" Dec 01 09:46:37 crc kubenswrapper[4933]: I1201 09:46:37.575327 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcc6213c-dbcc-474b-af78-1218910b00d0-catalog-content\") pod \"redhat-marketplace-d4khf\" (UID: \"dcc6213c-dbcc-474b-af78-1218910b00d0\") " pod="openshift-marketplace/redhat-marketplace-d4khf" Dec 01 09:46:37 crc kubenswrapper[4933]: I1201 09:46:37.575888 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz2xb\" (UniqueName: \"kubernetes.io/projected/dcc6213c-dbcc-474b-af78-1218910b00d0-kube-api-access-zz2xb\") pod \"redhat-marketplace-d4khf\" (UID: \"dcc6213c-dbcc-474b-af78-1218910b00d0\") " pod="openshift-marketplace/redhat-marketplace-d4khf" Dec 01 09:46:37 crc kubenswrapper[4933]: I1201 09:46:37.575958 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcc6213c-dbcc-474b-af78-1218910b00d0-utilities\") pod \"redhat-marketplace-d4khf\" (UID: \"dcc6213c-dbcc-474b-af78-1218910b00d0\") " pod="openshift-marketplace/redhat-marketplace-d4khf" Dec 01 09:46:37 crc kubenswrapper[4933]: I1201 09:46:37.575884 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcc6213c-dbcc-474b-af78-1218910b00d0-catalog-content\") pod \"redhat-marketplace-d4khf\" (UID: \"dcc6213c-dbcc-474b-af78-1218910b00d0\") " pod="openshift-marketplace/redhat-marketplace-d4khf" Dec 01 09:46:37 crc kubenswrapper[4933]: I1201 09:46:37.576496 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcc6213c-dbcc-474b-af78-1218910b00d0-utilities\") pod \"redhat-marketplace-d4khf\" (UID: \"dcc6213c-dbcc-474b-af78-1218910b00d0\") " pod="openshift-marketplace/redhat-marketplace-d4khf" Dec 01 09:46:37 crc kubenswrapper[4933]: I1201 09:46:37.598659 4933 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zz2xb\" (UniqueName: \"kubernetes.io/projected/dcc6213c-dbcc-474b-af78-1218910b00d0-kube-api-access-zz2xb\") pod \"redhat-marketplace-d4khf\" (UID: \"dcc6213c-dbcc-474b-af78-1218910b00d0\") " pod="openshift-marketplace/redhat-marketplace-d4khf" Dec 01 09:46:37 crc kubenswrapper[4933]: I1201 09:46:37.746447 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d4khf" Dec 01 09:46:38 crc kubenswrapper[4933]: I1201 09:46:38.176606 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d4khf"] Dec 01 09:46:38 crc kubenswrapper[4933]: I1201 09:46:38.664682 4933 generic.go:334] "Generic (PLEG): container finished" podID="dcc6213c-dbcc-474b-af78-1218910b00d0" containerID="cd9c4c747d2f4103e98c86bf9927d681d1bdf269905c3ded2095cbcc0a656e57" exitCode=0 Dec 01 09:46:38 crc kubenswrapper[4933]: I1201 09:46:38.664774 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d4khf" event={"ID":"dcc6213c-dbcc-474b-af78-1218910b00d0","Type":"ContainerDied","Data":"cd9c4c747d2f4103e98c86bf9927d681d1bdf269905c3ded2095cbcc0a656e57"} Dec 01 09:46:38 crc kubenswrapper[4933]: I1201 09:46:38.665181 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d4khf" event={"ID":"dcc6213c-dbcc-474b-af78-1218910b00d0","Type":"ContainerStarted","Data":"dd43d4ed9c5637fb6bb53c06dc2278b52b3eef00622fdeb12cf1d37603aa75ee"} Dec 01 09:46:40 crc kubenswrapper[4933]: I1201 09:46:40.644272 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7"] Dec 01 09:46:40 crc kubenswrapper[4933]: I1201 09:46:40.646439 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7" Dec 01 09:46:40 crc kubenswrapper[4933]: I1201 09:46:40.657556 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7"] Dec 01 09:46:40 crc kubenswrapper[4933]: I1201 09:46:40.697847 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-qkr8l" Dec 01 09:46:40 crc kubenswrapper[4933]: I1201 09:46:40.755780 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d4khf" event={"ID":"dcc6213c-dbcc-474b-af78-1218910b00d0","Type":"ContainerStarted","Data":"df94e9e8c6ebe778fedb6607b3bdf1dba62fc5b115a6d4ae407c10163ece0ef6"} Dec 01 09:46:40 crc kubenswrapper[4933]: I1201 09:46:40.807279 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e9c7ed8e-3041-437c-a3f0-b8c2cf94c503-util\") pod \"898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7\" (UID: \"e9c7ed8e-3041-437c-a3f0-b8c2cf94c503\") " pod="openstack-operators/898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7" Dec 01 09:46:40 crc kubenswrapper[4933]: I1201 09:46:40.807583 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgwz9\" (UniqueName: \"kubernetes.io/projected/e9c7ed8e-3041-437c-a3f0-b8c2cf94c503-kube-api-access-qgwz9\") pod \"898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7\" (UID: \"e9c7ed8e-3041-437c-a3f0-b8c2cf94c503\") " pod="openstack-operators/898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7" Dec 01 09:46:40 crc kubenswrapper[4933]: I1201 09:46:40.807714 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e9c7ed8e-3041-437c-a3f0-b8c2cf94c503-bundle\") pod \"898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7\" (UID: \"e9c7ed8e-3041-437c-a3f0-b8c2cf94c503\") " pod="openstack-operators/898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7" Dec 01 09:46:40 crc kubenswrapper[4933]: I1201 09:46:40.909555 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e9c7ed8e-3041-437c-a3f0-b8c2cf94c503-util\") pod \"898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7\" (UID: \"e9c7ed8e-3041-437c-a3f0-b8c2cf94c503\") " pod="openstack-operators/898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7" Dec 01 09:46:40 crc kubenswrapper[4933]: I1201 09:46:40.910000 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgwz9\" (UniqueName: \"kubernetes.io/projected/e9c7ed8e-3041-437c-a3f0-b8c2cf94c503-kube-api-access-qgwz9\") pod \"898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7\" (UID: \"e9c7ed8e-3041-437c-a3f0-b8c2cf94c503\") " pod="openstack-operators/898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7" Dec 01 09:46:40 crc kubenswrapper[4933]: I1201 09:46:40.910140 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e9c7ed8e-3041-437c-a3f0-b8c2cf94c503-bundle\") pod \"898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7\" (UID: 
\"e9c7ed8e-3041-437c-a3f0-b8c2cf94c503\") " pod="openstack-operators/898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7" Dec 01 09:46:40 crc kubenswrapper[4933]: I1201 09:46:40.910321 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e9c7ed8e-3041-437c-a3f0-b8c2cf94c503-util\") pod \"898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7\" (UID: \"e9c7ed8e-3041-437c-a3f0-b8c2cf94c503\") " pod="openstack-operators/898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7" Dec 01 09:46:40 crc kubenswrapper[4933]: I1201 09:46:40.910719 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e9c7ed8e-3041-437c-a3f0-b8c2cf94c503-bundle\") pod \"898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7\" (UID: \"e9c7ed8e-3041-437c-a3f0-b8c2cf94c503\") " pod="openstack-operators/898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7" Dec 01 09:46:40 crc kubenswrapper[4933]: I1201 09:46:40.932166 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgwz9\" (UniqueName: \"kubernetes.io/projected/e9c7ed8e-3041-437c-a3f0-b8c2cf94c503-kube-api-access-qgwz9\") pod \"898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7\" (UID: \"e9c7ed8e-3041-437c-a3f0-b8c2cf94c503\") " pod="openstack-operators/898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7" Dec 01 09:46:40 crc kubenswrapper[4933]: I1201 09:46:40.968637 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7" Dec 01 09:46:41 crc kubenswrapper[4933]: I1201 09:46:41.758542 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7"] Dec 01 09:46:41 crc kubenswrapper[4933]: I1201 09:46:41.764378 4933 generic.go:334] "Generic (PLEG): container finished" podID="dcc6213c-dbcc-474b-af78-1218910b00d0" containerID="df94e9e8c6ebe778fedb6607b3bdf1dba62fc5b115a6d4ae407c10163ece0ef6" exitCode=0 Dec 01 09:46:41 crc kubenswrapper[4933]: I1201 09:46:41.764441 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d4khf" event={"ID":"dcc6213c-dbcc-474b-af78-1218910b00d0","Type":"ContainerDied","Data":"df94e9e8c6ebe778fedb6607b3bdf1dba62fc5b115a6d4ae407c10163ece0ef6"} Dec 01 09:46:42 crc kubenswrapper[4933]: I1201 09:46:42.778530 4933 generic.go:334] "Generic (PLEG): container finished" podID="e9c7ed8e-3041-437c-a3f0-b8c2cf94c503" containerID="5fd039eac413ef116730251685d76c2ec46420bd582b8562fc1e7c7af58e04ff" exitCode=0 Dec 01 09:46:42 crc kubenswrapper[4933]: I1201 09:46:42.778619 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7" event={"ID":"e9c7ed8e-3041-437c-a3f0-b8c2cf94c503","Type":"ContainerDied","Data":"5fd039eac413ef116730251685d76c2ec46420bd582b8562fc1e7c7af58e04ff"} Dec 01 09:46:42 crc kubenswrapper[4933]: I1201 09:46:42.778655 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7" event={"ID":"e9c7ed8e-3041-437c-a3f0-b8c2cf94c503","Type":"ContainerStarted","Data":"8bfa1d58da01091914eca45d4312283d287e4f0a4cb1d1f23f683fd0035e0f32"} Dec 01 09:46:42 crc kubenswrapper[4933]: I1201 09:46:42.782091 4933 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d4khf" event={"ID":"dcc6213c-dbcc-474b-af78-1218910b00d0","Type":"ContainerStarted","Data":"dc02eef1b4395beb9622323705d94c4fd4a93aa1a9f74d0f3972d0c79c98d66c"} Dec 01 09:46:42 crc kubenswrapper[4933]: I1201 09:46:42.829196 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d4khf" podStartSLOduration=2.138553832 podStartE2EDuration="5.829172876s" podCreationTimestamp="2025-12-01 09:46:37 +0000 UTC" firstStartedPulling="2025-12-01 09:46:38.666492191 +0000 UTC m=+889.308215806" lastFinishedPulling="2025-12-01 09:46:42.357111235 +0000 UTC m=+892.998834850" observedRunningTime="2025-12-01 09:46:42.824007499 +0000 UTC m=+893.465731104" watchObservedRunningTime="2025-12-01 09:46:42.829172876 +0000 UTC m=+893.470896491" Dec 01 09:46:43 crc kubenswrapper[4933]: I1201 09:46:43.791138 4933 generic.go:334] "Generic (PLEG): container finished" podID="e9c7ed8e-3041-437c-a3f0-b8c2cf94c503" containerID="75ba1269413a8a231ac8c9f463b0606a401b03c19183e9a8014b53674e63b6df" exitCode=0 Dec 01 09:46:43 crc kubenswrapper[4933]: I1201 09:46:43.791243 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7" event={"ID":"e9c7ed8e-3041-437c-a3f0-b8c2cf94c503","Type":"ContainerDied","Data":"75ba1269413a8a231ac8c9f463b0606a401b03c19183e9a8014b53674e63b6df"} Dec 01 09:46:44 crc kubenswrapper[4933]: I1201 09:46:44.595140 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zmkgq"] Dec 01 09:46:44 crc kubenswrapper[4933]: I1201 09:46:44.597959 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zmkgq" Dec 01 09:46:44 crc kubenswrapper[4933]: I1201 09:46:44.612484 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zmkgq"] Dec 01 09:46:44 crc kubenswrapper[4933]: I1201 09:46:44.727563 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42e93bba-0d52-44a0-96d0-dad3eb354b20-utilities\") pod \"community-operators-zmkgq\" (UID: \"42e93bba-0d52-44a0-96d0-dad3eb354b20\") " pod="openshift-marketplace/community-operators-zmkgq" Dec 01 09:46:44 crc kubenswrapper[4933]: I1201 09:46:44.727644 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hx4p\" (UniqueName: \"kubernetes.io/projected/42e93bba-0d52-44a0-96d0-dad3eb354b20-kube-api-access-9hx4p\") pod \"community-operators-zmkgq\" (UID: \"42e93bba-0d52-44a0-96d0-dad3eb354b20\") " pod="openshift-marketplace/community-operators-zmkgq" Dec 01 09:46:44 crc kubenswrapper[4933]: I1201 09:46:44.727715 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42e93bba-0d52-44a0-96d0-dad3eb354b20-catalog-content\") pod \"community-operators-zmkgq\" (UID: \"42e93bba-0d52-44a0-96d0-dad3eb354b20\") " pod="openshift-marketplace/community-operators-zmkgq" Dec 01 09:46:44 crc kubenswrapper[4933]: I1201 09:46:44.802405 4933 generic.go:334] "Generic (PLEG): container finished" podID="e9c7ed8e-3041-437c-a3f0-b8c2cf94c503" containerID="43d2289d2fb5287bec049959c37b10dd80d6e92e2640ecaea3cd59f11b4ac512" exitCode=0 Dec 01 09:46:44 crc kubenswrapper[4933]: I1201 09:46:44.802482 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7" event={"ID":"e9c7ed8e-3041-437c-a3f0-b8c2cf94c503","Type":"ContainerDied","Data":"43d2289d2fb5287bec049959c37b10dd80d6e92e2640ecaea3cd59f11b4ac512"} Dec 01 09:46:44 crc kubenswrapper[4933]: I1201 09:46:44.829027 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42e93bba-0d52-44a0-96d0-dad3eb354b20-utilities\") pod \"community-operators-zmkgq\" (UID: \"42e93bba-0d52-44a0-96d0-dad3eb354b20\") " pod="openshift-marketplace/community-operators-zmkgq" Dec 01 09:46:44 crc kubenswrapper[4933]: I1201 09:46:44.829078 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hx4p\" (UniqueName: \"kubernetes.io/projected/42e93bba-0d52-44a0-96d0-dad3eb354b20-kube-api-access-9hx4p\") pod \"community-operators-zmkgq\" (UID: \"42e93bba-0d52-44a0-96d0-dad3eb354b20\") " pod="openshift-marketplace/community-operators-zmkgq" Dec 01 09:46:44 crc kubenswrapper[4933]: I1201 09:46:44.829125 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42e93bba-0d52-44a0-96d0-dad3eb354b20-catalog-content\") pod \"community-operators-zmkgq\" (UID: \"42e93bba-0d52-44a0-96d0-dad3eb354b20\") " pod="openshift-marketplace/community-operators-zmkgq" Dec 01 09:46:44 crc kubenswrapper[4933]: I1201 09:46:44.829652 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42e93bba-0d52-44a0-96d0-dad3eb354b20-catalog-content\") 
pod \"community-operators-zmkgq\" (UID: \"42e93bba-0d52-44a0-96d0-dad3eb354b20\") " pod="openshift-marketplace/community-operators-zmkgq" Dec 01 09:46:44 crc kubenswrapper[4933]: I1201 09:46:44.829659 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42e93bba-0d52-44a0-96d0-dad3eb354b20-utilities\") pod \"community-operators-zmkgq\" (UID: \"42e93bba-0d52-44a0-96d0-dad3eb354b20\") " pod="openshift-marketplace/community-operators-zmkgq" Dec 01 09:46:44 crc kubenswrapper[4933]: I1201 09:46:44.851891 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hx4p\" (UniqueName: \"kubernetes.io/projected/42e93bba-0d52-44a0-96d0-dad3eb354b20-kube-api-access-9hx4p\") pod \"community-operators-zmkgq\" (UID: \"42e93bba-0d52-44a0-96d0-dad3eb354b20\") " pod="openshift-marketplace/community-operators-zmkgq" Dec 01 09:46:44 crc kubenswrapper[4933]: I1201 09:46:44.921031 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zmkgq" Dec 01 09:46:45 crc kubenswrapper[4933]: I1201 09:46:45.434066 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zmkgq"] Dec 01 09:46:45 crc kubenswrapper[4933]: I1201 09:46:45.812000 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zmkgq" event={"ID":"42e93bba-0d52-44a0-96d0-dad3eb354b20","Type":"ContainerStarted","Data":"a061aa3afa495763acb94d96c3cbb25e50a3f7d89001ce50a8fee6d5ce5ff060"} Dec 01 09:46:46 crc kubenswrapper[4933]: I1201 09:46:46.092728 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7" Dec 01 09:46:46 crc kubenswrapper[4933]: I1201 09:46:46.203669 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e9c7ed8e-3041-437c-a3f0-b8c2cf94c503-bundle\") pod \"e9c7ed8e-3041-437c-a3f0-b8c2cf94c503\" (UID: \"e9c7ed8e-3041-437c-a3f0-b8c2cf94c503\") " Dec 01 09:46:46 crc kubenswrapper[4933]: I1201 09:46:46.203812 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e9c7ed8e-3041-437c-a3f0-b8c2cf94c503-util\") pod \"e9c7ed8e-3041-437c-a3f0-b8c2cf94c503\" (UID: \"e9c7ed8e-3041-437c-a3f0-b8c2cf94c503\") " Dec 01 09:46:46 crc kubenswrapper[4933]: I1201 09:46:46.203839 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgwz9\" (UniqueName: \"kubernetes.io/projected/e9c7ed8e-3041-437c-a3f0-b8c2cf94c503-kube-api-access-qgwz9\") pod \"e9c7ed8e-3041-437c-a3f0-b8c2cf94c503\" (UID: \"e9c7ed8e-3041-437c-a3f0-b8c2cf94c503\") " Dec 01 09:46:46 crc kubenswrapper[4933]: I1201 09:46:46.204735 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9c7ed8e-3041-437c-a3f0-b8c2cf94c503-bundle" (OuterVolumeSpecName: "bundle") pod "e9c7ed8e-3041-437c-a3f0-b8c2cf94c503" (UID: "e9c7ed8e-3041-437c-a3f0-b8c2cf94c503"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:46:46 crc kubenswrapper[4933]: I1201 09:46:46.212700 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9c7ed8e-3041-437c-a3f0-b8c2cf94c503-kube-api-access-qgwz9" (OuterVolumeSpecName: "kube-api-access-qgwz9") pod "e9c7ed8e-3041-437c-a3f0-b8c2cf94c503" (UID: "e9c7ed8e-3041-437c-a3f0-b8c2cf94c503"). InnerVolumeSpecName "kube-api-access-qgwz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:46:46 crc kubenswrapper[4933]: I1201 09:46:46.217821 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9c7ed8e-3041-437c-a3f0-b8c2cf94c503-util" (OuterVolumeSpecName: "util") pod "e9c7ed8e-3041-437c-a3f0-b8c2cf94c503" (UID: "e9c7ed8e-3041-437c-a3f0-b8c2cf94c503"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:46:46 crc kubenswrapper[4933]: I1201 09:46:46.306020 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgwz9\" (UniqueName: \"kubernetes.io/projected/e9c7ed8e-3041-437c-a3f0-b8c2cf94c503-kube-api-access-qgwz9\") on node \"crc\" DevicePath \"\"" Dec 01 09:46:46 crc kubenswrapper[4933]: I1201 09:46:46.306058 4933 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e9c7ed8e-3041-437c-a3f0-b8c2cf94c503-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:46:46 crc kubenswrapper[4933]: I1201 09:46:46.306070 4933 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e9c7ed8e-3041-437c-a3f0-b8c2cf94c503-util\") on node \"crc\" DevicePath \"\"" Dec 01 09:46:46 crc kubenswrapper[4933]: I1201 09:46:46.819260 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7" event={"ID":"e9c7ed8e-3041-437c-a3f0-b8c2cf94c503","Type":"ContainerDied","Data":"8bfa1d58da01091914eca45d4312283d287e4f0a4cb1d1f23f683fd0035e0f32"} Dec 01 09:46:46 crc kubenswrapper[4933]: I1201 09:46:46.819585 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bfa1d58da01091914eca45d4312283d287e4f0a4cb1d1f23f683fd0035e0f32" Dec 01 09:46:46 crc kubenswrapper[4933]: I1201 09:46:46.819362 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7" Dec 01 09:46:46 crc kubenswrapper[4933]: I1201 09:46:46.821069 4933 generic.go:334] "Generic (PLEG): container finished" podID="42e93bba-0d52-44a0-96d0-dad3eb354b20" containerID="6c9720e77973fa29d839a56299fac90b8d6b0dfabae8280fdbdced69f81c2c50" exitCode=0 Dec 01 09:46:46 crc kubenswrapper[4933]: I1201 09:46:46.821099 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zmkgq" event={"ID":"42e93bba-0d52-44a0-96d0-dad3eb354b20","Type":"ContainerDied","Data":"6c9720e77973fa29d839a56299fac90b8d6b0dfabae8280fdbdced69f81c2c50"} Dec 01 09:46:47 crc kubenswrapper[4933]: I1201 09:46:47.747608 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d4khf" Dec 01 09:46:47 crc kubenswrapper[4933]: I1201 09:46:47.747689 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d4khf" Dec 01 09:46:47 crc kubenswrapper[4933]: I1201 09:46:47.801280 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d4khf" Dec 01 09:46:47 crc kubenswrapper[4933]: I1201 09:46:47.831672 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zmkgq" event={"ID":"42e93bba-0d52-44a0-96d0-dad3eb354b20","Type":"ContainerStarted","Data":"c1165f885a2882e40037b1aae5735792daea0dae0c203fb63065fd9b9ba273fa"} Dec 01 09:46:47 crc kubenswrapper[4933]: I1201 09:46:47.882027 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d4khf" Dec 01 09:46:48 crc kubenswrapper[4933]: I1201 09:46:48.841721 4933 generic.go:334] "Generic (PLEG): container finished" podID="42e93bba-0d52-44a0-96d0-dad3eb354b20" containerID="c1165f885a2882e40037b1aae5735792daea0dae0c203fb63065fd9b9ba273fa" exitCode=0 Dec 01 09:46:48 crc kubenswrapper[4933]: I1201 09:46:48.841773 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zmkgq" event={"ID":"42e93bba-0d52-44a0-96d0-dad3eb354b20","Type":"ContainerDied","Data":"c1165f885a2882e40037b1aae5735792daea0dae0c203fb63065fd9b9ba273fa"} Dec 01 09:46:49 crc kubenswrapper[4933]: I1201 09:46:49.385133 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d4khf"] Dec 01 09:46:49 crc kubenswrapper[4933]: I1201 09:46:49.852091 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zmkgq" event={"ID":"42e93bba-0d52-44a0-96d0-dad3eb354b20","Type":"ContainerStarted","Data":"433439900ec9688fd4fe57711e6468dc98ffaba7289dfbecb59a3e0aac3cc51e"} Dec 01 09:46:49 crc kubenswrapper[4933]: I1201 09:46:49.852559 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d4khf" podUID="dcc6213c-dbcc-474b-af78-1218910b00d0" containerName="registry-server" containerID="cri-o://dc02eef1b4395beb9622323705d94c4fd4a93aa1a9f74d0f3972d0c79c98d66c" gracePeriod=2 Dec 01 09:46:49 crc kubenswrapper[4933]: I1201 09:46:49.882295 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zmkgq" podStartSLOduration=3.474410839 podStartE2EDuration="5.882278968s" podCreationTimestamp="2025-12-01 09:46:44 +0000 UTC" 
firstStartedPulling="2025-12-01 09:46:46.823489891 +0000 UTC m=+897.465213496" lastFinishedPulling="2025-12-01 09:46:49.23135801 +0000 UTC m=+899.873081625" observedRunningTime="2025-12-01 09:46:49.877770157 +0000 UTC m=+900.519493792" watchObservedRunningTime="2025-12-01 09:46:49.882278968 +0000 UTC m=+900.524002583" Dec 01 09:46:50 crc kubenswrapper[4933]: I1201 09:46:50.227574 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d4khf" Dec 01 09:46:50 crc kubenswrapper[4933]: I1201 09:46:50.363135 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcc6213c-dbcc-474b-af78-1218910b00d0-catalog-content\") pod \"dcc6213c-dbcc-474b-af78-1218910b00d0\" (UID: \"dcc6213c-dbcc-474b-af78-1218910b00d0\") " Dec 01 09:46:50 crc kubenswrapper[4933]: I1201 09:46:50.363291 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcc6213c-dbcc-474b-af78-1218910b00d0-utilities\") pod \"dcc6213c-dbcc-474b-af78-1218910b00d0\" (UID: \"dcc6213c-dbcc-474b-af78-1218910b00d0\") " Dec 01 09:46:50 crc kubenswrapper[4933]: I1201 09:46:50.363347 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz2xb\" (UniqueName: \"kubernetes.io/projected/dcc6213c-dbcc-474b-af78-1218910b00d0-kube-api-access-zz2xb\") pod \"dcc6213c-dbcc-474b-af78-1218910b00d0\" (UID: \"dcc6213c-dbcc-474b-af78-1218910b00d0\") " Dec 01 09:46:50 crc kubenswrapper[4933]: I1201 09:46:50.364281 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcc6213c-dbcc-474b-af78-1218910b00d0-utilities" (OuterVolumeSpecName: "utilities") pod "dcc6213c-dbcc-474b-af78-1218910b00d0" (UID: "dcc6213c-dbcc-474b-af78-1218910b00d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:46:50 crc kubenswrapper[4933]: I1201 09:46:50.370866 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcc6213c-dbcc-474b-af78-1218910b00d0-kube-api-access-zz2xb" (OuterVolumeSpecName: "kube-api-access-zz2xb") pod "dcc6213c-dbcc-474b-af78-1218910b00d0" (UID: "dcc6213c-dbcc-474b-af78-1218910b00d0"). InnerVolumeSpecName "kube-api-access-zz2xb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:46:50 crc kubenswrapper[4933]: I1201 09:46:50.383595 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcc6213c-dbcc-474b-af78-1218910b00d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dcc6213c-dbcc-474b-af78-1218910b00d0" (UID: "dcc6213c-dbcc-474b-af78-1218910b00d0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:46:50 crc kubenswrapper[4933]: I1201 09:46:50.465267 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcc6213c-dbcc-474b-af78-1218910b00d0-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 09:46:50 crc kubenswrapper[4933]: I1201 09:46:50.465349 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz2xb\" (UniqueName: \"kubernetes.io/projected/dcc6213c-dbcc-474b-af78-1218910b00d0-kube-api-access-zz2xb\") on node \"crc\" DevicePath \"\"" Dec 01 09:46:50 crc kubenswrapper[4933]: I1201 09:46:50.465370 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcc6213c-dbcc-474b-af78-1218910b00d0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 09:46:50 crc kubenswrapper[4933]: I1201 09:46:50.862746 4933 generic.go:334] "Generic (PLEG): container finished" podID="dcc6213c-dbcc-474b-af78-1218910b00d0" containerID="dc02eef1b4395beb9622323705d94c4fd4a93aa1a9f74d0f3972d0c79c98d66c" exitCode=0 Dec 01 09:46:50 crc kubenswrapper[4933]: I1201 09:46:50.863782 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d4khf" Dec 01 09:46:50 crc kubenswrapper[4933]: I1201 09:46:50.870589 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d4khf" event={"ID":"dcc6213c-dbcc-474b-af78-1218910b00d0","Type":"ContainerDied","Data":"dc02eef1b4395beb9622323705d94c4fd4a93aa1a9f74d0f3972d0c79c98d66c"} Dec 01 09:46:50 crc kubenswrapper[4933]: I1201 09:46:50.870711 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d4khf" event={"ID":"dcc6213c-dbcc-474b-af78-1218910b00d0","Type":"ContainerDied","Data":"dd43d4ed9c5637fb6bb53c06dc2278b52b3eef00622fdeb12cf1d37603aa75ee"} Dec 01 09:46:50 crc kubenswrapper[4933]: I1201 09:46:50.870776 4933 scope.go:117] "RemoveContainer" containerID="dc02eef1b4395beb9622323705d94c4fd4a93aa1a9f74d0f3972d0c79c98d66c" Dec 01 09:46:50 crc kubenswrapper[4933]: I1201 09:46:50.889271 4933 scope.go:117] "RemoveContainer" containerID="df94e9e8c6ebe778fedb6607b3bdf1dba62fc5b115a6d4ae407c10163ece0ef6" Dec 01 09:46:50 crc kubenswrapper[4933]: I1201 09:46:50.904615 4933 scope.go:117] "RemoveContainer" containerID="cd9c4c747d2f4103e98c86bf9927d681d1bdf269905c3ded2095cbcc0a656e57" Dec 01 09:46:50 crc kubenswrapper[4933]: I1201 09:46:50.913996 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d4khf"] Dec 01 09:46:50 crc kubenswrapper[4933]: I1201 09:46:50.917643 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d4khf"] Dec 01 09:46:50 crc kubenswrapper[4933]: I1201 09:46:50.931871 4933 scope.go:117] "RemoveContainer" containerID="dc02eef1b4395beb9622323705d94c4fd4a93aa1a9f74d0f3972d0c79c98d66c" Dec 01 09:46:50 crc kubenswrapper[4933]: E1201 09:46:50.932367 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc02eef1b4395beb9622323705d94c4fd4a93aa1a9f74d0f3972d0c79c98d66c\": container with ID starting with dc02eef1b4395beb9622323705d94c4fd4a93aa1a9f74d0f3972d0c79c98d66c not found: ID does not exist" containerID="dc02eef1b4395beb9622323705d94c4fd4a93aa1a9f74d0f3972d0c79c98d66c" Dec 01 09:46:50 crc kubenswrapper[4933]: I1201 09:46:50.932463 4933 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc02eef1b4395beb9622323705d94c4fd4a93aa1a9f74d0f3972d0c79c98d66c"} err="failed to get container status \"dc02eef1b4395beb9622323705d94c4fd4a93aa1a9f74d0f3972d0c79c98d66c\": rpc error: code = NotFound desc = could not find container \"dc02eef1b4395beb9622323705d94c4fd4a93aa1a9f74d0f3972d0c79c98d66c\": container with ID starting with dc02eef1b4395beb9622323705d94c4fd4a93aa1a9f74d0f3972d0c79c98d66c not found: ID does not exist" Dec 01 09:46:50 crc kubenswrapper[4933]: I1201 09:46:50.932564 4933 scope.go:117] "RemoveContainer" containerID="df94e9e8c6ebe778fedb6607b3bdf1dba62fc5b115a6d4ae407c10163ece0ef6" Dec 01 09:46:50 crc kubenswrapper[4933]: E1201 09:46:50.932796 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df94e9e8c6ebe778fedb6607b3bdf1dba62fc5b115a6d4ae407c10163ece0ef6\": container with ID starting with df94e9e8c6ebe778fedb6607b3bdf1dba62fc5b115a6d4ae407c10163ece0ef6 not found: ID does not exist" containerID="df94e9e8c6ebe778fedb6607b3bdf1dba62fc5b115a6d4ae407c10163ece0ef6" Dec 01 09:46:50 crc kubenswrapper[4933]: I1201 09:46:50.932872 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df94e9e8c6ebe778fedb6607b3bdf1dba62fc5b115a6d4ae407c10163ece0ef6"} err="failed to get container status \"df94e9e8c6ebe778fedb6607b3bdf1dba62fc5b115a6d4ae407c10163ece0ef6\": rpc error: code = NotFound desc = could not find container \"df94e9e8c6ebe778fedb6607b3bdf1dba62fc5b115a6d4ae407c10163ece0ef6\": container with ID starting with df94e9e8c6ebe778fedb6607b3bdf1dba62fc5b115a6d4ae407c10163ece0ef6 not found: ID does not exist" Dec 01 09:46:50 crc kubenswrapper[4933]: I1201 09:46:50.932946 4933 scope.go:117] "RemoveContainer" containerID="cd9c4c747d2f4103e98c86bf9927d681d1bdf269905c3ded2095cbcc0a656e57" Dec 01 09:46:50 crc kubenswrapper[4933]: E1201 09:46:50.933173 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd9c4c747d2f4103e98c86bf9927d681d1bdf269905c3ded2095cbcc0a656e57\": container with ID starting with cd9c4c747d2f4103e98c86bf9927d681d1bdf269905c3ded2095cbcc0a656e57 not found: ID does not exist" containerID="cd9c4c747d2f4103e98c86bf9927d681d1bdf269905c3ded2095cbcc0a656e57" Dec 01 09:46:50 crc kubenswrapper[4933]: I1201 09:46:50.933260 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd9c4c747d2f4103e98c86bf9927d681d1bdf269905c3ded2095cbcc0a656e57"} err="failed to get container status \"cd9c4c747d2f4103e98c86bf9927d681d1bdf269905c3ded2095cbcc0a656e57\": rpc error: code = NotFound desc = could not find container \"cd9c4c747d2f4103e98c86bf9927d681d1bdf269905c3ded2095cbcc0a656e57\": container with ID starting with cd9c4c747d2f4103e98c86bf9927d681d1bdf269905c3ded2095cbcc0a656e57 not found: ID does not exist" Dec 01 09:46:51 crc kubenswrapper[4933]: I1201 09:46:51.674824 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcc6213c-dbcc-474b-af78-1218910b00d0" path="/var/lib/kubelet/pods/dcc6213c-dbcc-474b-af78-1218910b00d0/volumes" Dec 01 09:46:51 crc kubenswrapper[4933]: I1201 09:46:51.969355 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-66ff97f68c-jqgr4"] Dec 01 09:46:51 crc kubenswrapper[4933]: E1201 09:46:51.969606 4933 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="e9c7ed8e-3041-437c-a3f0-b8c2cf94c503" containerName="extract" Dec 01 09:46:51 crc kubenswrapper[4933]: I1201 09:46:51.969618 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c7ed8e-3041-437c-a3f0-b8c2cf94c503" containerName="extract" Dec 01 09:46:51 crc kubenswrapper[4933]: E1201 09:46:51.969631 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcc6213c-dbcc-474b-af78-1218910b00d0" containerName="registry-server" Dec 01 09:46:51 crc kubenswrapper[4933]: I1201 09:46:51.969637 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcc6213c-dbcc-474b-af78-1218910b00d0" containerName="registry-server" Dec 01 09:46:51 crc kubenswrapper[4933]: E1201 09:46:51.969646 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c7ed8e-3041-437c-a3f0-b8c2cf94c503" containerName="pull" Dec 01 09:46:51 crc kubenswrapper[4933]: I1201 09:46:51.969655 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c7ed8e-3041-437c-a3f0-b8c2cf94c503" containerName="pull" Dec 01 09:46:51 crc kubenswrapper[4933]: E1201 09:46:51.969674 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcc6213c-dbcc-474b-af78-1218910b00d0" containerName="extract-content" Dec 01 09:46:51 crc kubenswrapper[4933]: I1201 09:46:51.969682 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcc6213c-dbcc-474b-af78-1218910b00d0" containerName="extract-content" Dec 01 09:46:51 crc kubenswrapper[4933]: E1201 09:46:51.969698 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcc6213c-dbcc-474b-af78-1218910b00d0" containerName="extract-utilities" Dec 01 09:46:51 crc kubenswrapper[4933]: I1201 09:46:51.969706 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcc6213c-dbcc-474b-af78-1218910b00d0" containerName="extract-utilities" Dec 01 09:46:51 crc kubenswrapper[4933]: E1201 09:46:51.969715 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c7ed8e-3041-437c-a3f0-b8c2cf94c503" containerName="util" Dec 01 09:46:51 crc kubenswrapper[4933]: I1201 09:46:51.969721 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c7ed8e-3041-437c-a3f0-b8c2cf94c503" containerName="util" Dec 01 09:46:51 crc kubenswrapper[4933]: I1201 09:46:51.969822 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcc6213c-dbcc-474b-af78-1218910b00d0" containerName="registry-server" Dec 01 09:46:51 crc kubenswrapper[4933]: I1201 09:46:51.969839 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9c7ed8e-3041-437c-a3f0-b8c2cf94c503" containerName="extract" Dec 01 09:46:51 crc kubenswrapper[4933]: I1201 09:46:51.970380 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-66ff97f68c-jqgr4" Dec 01 09:46:51 crc kubenswrapper[4933]: I1201 09:46:51.972838 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-9tbc4" Dec 01 09:46:51 crc kubenswrapper[4933]: I1201 09:46:51.987105 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwsl8\" (UniqueName: \"kubernetes.io/projected/2a175cf5-68b0-46ab-9e64-646af044da97-kube-api-access-nwsl8\") pod \"openstack-operator-controller-operator-66ff97f68c-jqgr4\" (UID: \"2a175cf5-68b0-46ab-9e64-646af044da97\") " pod="openstack-operators/openstack-operator-controller-operator-66ff97f68c-jqgr4" Dec 01 09:46:52 crc kubenswrapper[4933]: I1201 09:46:52.005491 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-66ff97f68c-jqgr4"] Dec 01 09:46:52 crc kubenswrapper[4933]: I1201 09:46:52.088570 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwsl8\" (UniqueName: \"kubernetes.io/projected/2a175cf5-68b0-46ab-9e64-646af044da97-kube-api-access-nwsl8\") pod \"openstack-operator-controller-operator-66ff97f68c-jqgr4\" (UID: \"2a175cf5-68b0-46ab-9e64-646af044da97\") " pod="openstack-operators/openstack-operator-controller-operator-66ff97f68c-jqgr4" Dec 01 09:46:52 crc kubenswrapper[4933]: I1201 09:46:52.118858 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwsl8\" (UniqueName: \"kubernetes.io/projected/2a175cf5-68b0-46ab-9e64-646af044da97-kube-api-access-nwsl8\") pod \"openstack-operator-controller-operator-66ff97f68c-jqgr4\" (UID: \"2a175cf5-68b0-46ab-9e64-646af044da97\") " pod="openstack-operators/openstack-operator-controller-operator-66ff97f68c-jqgr4" Dec 01 09:46:52 crc kubenswrapper[4933]: I1201 09:46:52.290674 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-66ff97f68c-jqgr4" Dec 01 09:46:52 crc kubenswrapper[4933]: I1201 09:46:52.736932 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-66ff97f68c-jqgr4"] Dec 01 09:46:52 crc kubenswrapper[4933]: W1201 09:46:52.742146 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a175cf5_68b0_46ab_9e64_646af044da97.slice/crio-a30d26aa402057bcaee1f949c819a84a285585c78b124f512b6d29368d8bfaa0 WatchSource:0}: Error finding container a30d26aa402057bcaee1f949c819a84a285585c78b124f512b6d29368d8bfaa0: Status 404 returned error can't find the container with id a30d26aa402057bcaee1f949c819a84a285585c78b124f512b6d29368d8bfaa0 Dec 01 09:46:52 crc kubenswrapper[4933]: I1201 09:46:52.880896 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-66ff97f68c-jqgr4" event={"ID":"2a175cf5-68b0-46ab-9e64-646af044da97","Type":"ContainerStarted","Data":"a30d26aa402057bcaee1f949c819a84a285585c78b124f512b6d29368d8bfaa0"} Dec 01 09:46:54 crc kubenswrapper[4933]: I1201 09:46:54.922613 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zmkgq" Dec 01 09:46:54 crc kubenswrapper[4933]: I1201 09:46:54.923109 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zmkgq" Dec 01 09:46:54 crc kubenswrapper[4933]: I1201 09:46:54.989380 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zmkgq" Dec 01 09:46:55 crc kubenswrapper[4933]: I1201 09:46:55.200171 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vhm29"] Dec 01 09:46:55 crc kubenswrapper[4933]: I1201 09:46:55.202139 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vhm29" Dec 01 09:46:55 crc kubenswrapper[4933]: I1201 09:46:55.214297 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vhm29"] Dec 01 09:46:55 crc kubenswrapper[4933]: I1201 09:46:55.313044 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64493c13-c6fd-47f2-a910-4138fd29c0b2-catalog-content\") pod \"certified-operators-vhm29\" (UID: \"64493c13-c6fd-47f2-a910-4138fd29c0b2\") " pod="openshift-marketplace/certified-operators-vhm29" Dec 01 09:46:55 crc kubenswrapper[4933]: I1201 09:46:55.313513 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64493c13-c6fd-47f2-a910-4138fd29c0b2-utilities\") pod \"certified-operators-vhm29\" (UID: \"64493c13-c6fd-47f2-a910-4138fd29c0b2\") " pod="openshift-marketplace/certified-operators-vhm29" Dec 01 09:46:55 crc kubenswrapper[4933]: I1201 09:46:55.313691 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74hfw\" (UniqueName: \"kubernetes.io/projected/64493c13-c6fd-47f2-a910-4138fd29c0b2-kube-api-access-74hfw\") pod \"certified-operators-vhm29\" (UID: \"64493c13-c6fd-47f2-a910-4138fd29c0b2\") " pod="openshift-marketplace/certified-operators-vhm29" Dec 01 09:46:55 crc kubenswrapper[4933]: I1201 09:46:55.414960 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74hfw\" (UniqueName: \"kubernetes.io/projected/64493c13-c6fd-47f2-a910-4138fd29c0b2-kube-api-access-74hfw\") pod \"certified-operators-vhm29\" (UID: \"64493c13-c6fd-47f2-a910-4138fd29c0b2\") " pod="openshift-marketplace/certified-operators-vhm29" Dec 01 09:46:55 crc kubenswrapper[4933]: I1201 09:46:55.415081 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64493c13-c6fd-47f2-a910-4138fd29c0b2-catalog-content\") pod \"certified-operators-vhm29\" (UID: \"64493c13-c6fd-47f2-a910-4138fd29c0b2\") " pod="openshift-marketplace/certified-operators-vhm29" Dec 01 09:46:55 crc kubenswrapper[4933]: I1201 09:46:55.415121 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64493c13-c6fd-47f2-a910-4138fd29c0b2-utilities\") pod \"certified-operators-vhm29\" (UID: \"64493c13-c6fd-47f2-a910-4138fd29c0b2\") " pod="openshift-marketplace/certified-operators-vhm29" Dec 01 09:46:55 crc kubenswrapper[4933]: I1201 09:46:55.415786 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64493c13-c6fd-47f2-a910-4138fd29c0b2-utilities\") pod \"certified-operators-vhm29\" (UID: \"64493c13-c6fd-47f2-a910-4138fd29c0b2\") " pod="openshift-marketplace/certified-operators-vhm29" Dec 01 09:46:55 crc kubenswrapper[4933]: I1201 09:46:55.415984 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64493c13-c6fd-47f2-a910-4138fd29c0b2-catalog-content\") pod \"certified-operators-vhm29\" (UID: \"64493c13-c6fd-47f2-a910-4138fd29c0b2\") " pod="openshift-marketplace/certified-operators-vhm29" Dec 01 09:46:55 crc kubenswrapper[4933]: I1201 09:46:55.440931 4933 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-74hfw\" (UniqueName: \"kubernetes.io/projected/64493c13-c6fd-47f2-a910-4138fd29c0b2-kube-api-access-74hfw\") pod \"certified-operators-vhm29\" (UID: \"64493c13-c6fd-47f2-a910-4138fd29c0b2\") " pod="openshift-marketplace/certified-operators-vhm29" Dec 01 09:46:55 crc kubenswrapper[4933]: I1201 09:46:55.542862 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vhm29" Dec 01 09:46:56 crc kubenswrapper[4933]: I1201 09:46:56.202194 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zmkgq" Dec 01 09:46:58 crc kubenswrapper[4933]: I1201 09:46:58.984498 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zmkgq"] Dec 01 09:46:58 crc kubenswrapper[4933]: I1201 09:46:58.985138 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zmkgq" podUID="42e93bba-0d52-44a0-96d0-dad3eb354b20" containerName="registry-server" containerID="cri-o://433439900ec9688fd4fe57711e6468dc98ffaba7289dfbecb59a3e0aac3cc51e" gracePeriod=2 Dec 01 09:46:59 crc kubenswrapper[4933]: I1201 09:46:59.550615 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zmkgq" Dec 01 09:46:59 crc kubenswrapper[4933]: I1201 09:46:59.610859 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42e93bba-0d52-44a0-96d0-dad3eb354b20-utilities\") pod \"42e93bba-0d52-44a0-96d0-dad3eb354b20\" (UID: \"42e93bba-0d52-44a0-96d0-dad3eb354b20\") " Dec 01 09:46:59 crc kubenswrapper[4933]: I1201 09:46:59.610944 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hx4p\" (UniqueName: \"kubernetes.io/projected/42e93bba-0d52-44a0-96d0-dad3eb354b20-kube-api-access-9hx4p\") pod \"42e93bba-0d52-44a0-96d0-dad3eb354b20\" (UID: \"42e93bba-0d52-44a0-96d0-dad3eb354b20\") " Dec 01 09:46:59 crc kubenswrapper[4933]: I1201 09:46:59.611020 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42e93bba-0d52-44a0-96d0-dad3eb354b20-catalog-content\") pod \"42e93bba-0d52-44a0-96d0-dad3eb354b20\" (UID: \"42e93bba-0d52-44a0-96d0-dad3eb354b20\") " Dec 01 09:46:59 crc kubenswrapper[4933]: I1201 09:46:59.612398 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42e93bba-0d52-44a0-96d0-dad3eb354b20-utilities" (OuterVolumeSpecName: "utilities") pod "42e93bba-0d52-44a0-96d0-dad3eb354b20" (UID: "42e93bba-0d52-44a0-96d0-dad3eb354b20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:46:59 crc kubenswrapper[4933]: I1201 09:46:59.620964 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42e93bba-0d52-44a0-96d0-dad3eb354b20-kube-api-access-9hx4p" (OuterVolumeSpecName: "kube-api-access-9hx4p") pod "42e93bba-0d52-44a0-96d0-dad3eb354b20" (UID: "42e93bba-0d52-44a0-96d0-dad3eb354b20"). InnerVolumeSpecName "kube-api-access-9hx4p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:46:59 crc kubenswrapper[4933]: I1201 09:46:59.668217 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42e93bba-0d52-44a0-96d0-dad3eb354b20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42e93bba-0d52-44a0-96d0-dad3eb354b20" (UID: "42e93bba-0d52-44a0-96d0-dad3eb354b20"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:46:59 crc kubenswrapper[4933]: I1201 09:46:59.712213 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hx4p\" (UniqueName: \"kubernetes.io/projected/42e93bba-0d52-44a0-96d0-dad3eb354b20-kube-api-access-9hx4p\") on node \"crc\" DevicePath \"\"" Dec 01 09:46:59 crc kubenswrapper[4933]: I1201 09:46:59.712802 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42e93bba-0d52-44a0-96d0-dad3eb354b20-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 09:46:59 crc kubenswrapper[4933]: I1201 09:46:59.712814 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42e93bba-0d52-44a0-96d0-dad3eb354b20-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 09:46:59 crc kubenswrapper[4933]: I1201 09:46:59.718595 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vhm29"] Dec 01 09:46:59 crc kubenswrapper[4933]: W1201 09:46:59.723908 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64493c13_c6fd_47f2_a910_4138fd29c0b2.slice/crio-33daa2d3d22049c2730941983b1780119729e960a595c67dce382cd679196005 WatchSource:0}: Error finding container 33daa2d3d22049c2730941983b1780119729e960a595c67dce382cd679196005: Status 404 returned error can't find the container with id 33daa2d3d22049c2730941983b1780119729e960a595c67dce382cd679196005 Dec 01 09:46:59 crc kubenswrapper[4933]: I1201 09:46:59.992102 4933 generic.go:334] "Generic (PLEG): container finished" podID="42e93bba-0d52-44a0-96d0-dad3eb354b20" containerID="433439900ec9688fd4fe57711e6468dc98ffaba7289dfbecb59a3e0aac3cc51e" exitCode=0 Dec 01 09:46:59 crc kubenswrapper[4933]: I1201 09:46:59.992161 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zmkgq" Dec 01 09:46:59 crc kubenswrapper[4933]: I1201 09:46:59.992174 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zmkgq" event={"ID":"42e93bba-0d52-44a0-96d0-dad3eb354b20","Type":"ContainerDied","Data":"433439900ec9688fd4fe57711e6468dc98ffaba7289dfbecb59a3e0aac3cc51e"} Dec 01 09:46:59 crc kubenswrapper[4933]: I1201 09:46:59.992203 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zmkgq" event={"ID":"42e93bba-0d52-44a0-96d0-dad3eb354b20","Type":"ContainerDied","Data":"a061aa3afa495763acb94d96c3cbb25e50a3f7d89001ce50a8fee6d5ce5ff060"} Dec 01 09:46:59 crc kubenswrapper[4933]: I1201 09:46:59.992221 4933 scope.go:117] "RemoveContainer" containerID="433439900ec9688fd4fe57711e6468dc98ffaba7289dfbecb59a3e0aac3cc51e" Dec 01 09:46:59 crc kubenswrapper[4933]: I1201 09:46:59.994586 4933 generic.go:334] "Generic (PLEG): container finished" podID="64493c13-c6fd-47f2-a910-4138fd29c0b2" containerID="2e15959fb377d3e2dfe053ce9902f0d37c4dc65fb658d267bfba68e15a451850" exitCode=0 Dec 01 09:46:59 crc kubenswrapper[4933]: I1201 09:46:59.994656 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhm29" event={"ID":"64493c13-c6fd-47f2-a910-4138fd29c0b2","Type":"ContainerDied","Data":"2e15959fb377d3e2dfe053ce9902f0d37c4dc65fb658d267bfba68e15a451850"} Dec 01 09:46:59 crc kubenswrapper[4933]: I1201 09:46:59.994678 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhm29" event={"ID":"64493c13-c6fd-47f2-a910-4138fd29c0b2","Type":"ContainerStarted","Data":"33daa2d3d22049c2730941983b1780119729e960a595c67dce382cd679196005"} Dec 01 09:46:59 crc kubenswrapper[4933]: I1201 09:46:59.998939 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-66ff97f68c-jqgr4" event={"ID":"2a175cf5-68b0-46ab-9e64-646af044da97","Type":"ContainerStarted","Data":"845d803f3ae28c607c6cf975839b1f2b7fc76aa47aa2c51742db1ca8ff974594"} Dec 01 09:47:00 crc kubenswrapper[4933]: I1201 09:47:00.001535 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-66ff97f68c-jqgr4" Dec 01 09:47:00 crc kubenswrapper[4933]: I1201 09:47:00.012389 4933 scope.go:117] "RemoveContainer" containerID="c1165f885a2882e40037b1aae5735792daea0dae0c203fb63065fd9b9ba273fa" Dec 01 09:47:00 crc kubenswrapper[4933]: I1201 09:47:00.039775 4933 scope.go:117] "RemoveContainer" containerID="6c9720e77973fa29d839a56299fac90b8d6b0dfabae8280fdbdced69f81c2c50" Dec 01 09:47:00 crc kubenswrapper[4933]: I1201 09:47:00.055898 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-66ff97f68c-jqgr4" podStartSLOduration=2.220479396 podStartE2EDuration="9.055869779s" podCreationTimestamp="2025-12-01 09:46:51 +0000 UTC" firstStartedPulling="2025-12-01 09:46:52.745568486 +0000 UTC m=+903.387292101" lastFinishedPulling="2025-12-01 09:46:59.580958869 +0000 UTC m=+910.222682484" observedRunningTime="2025-12-01 09:47:00.044202283 +0000 UTC m=+910.685925928" watchObservedRunningTime="2025-12-01 09:47:00.055869779 +0000 UTC m=+910.697593404" Dec 01 09:47:00 crc kubenswrapper[4933]: I1201 09:47:00.061593 4933 scope.go:117] "RemoveContainer" 
containerID="433439900ec9688fd4fe57711e6468dc98ffaba7289dfbecb59a3e0aac3cc51e" Dec 01 09:47:00 crc kubenswrapper[4933]: E1201 09:47:00.062294 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"433439900ec9688fd4fe57711e6468dc98ffaba7289dfbecb59a3e0aac3cc51e\": container with ID starting with 433439900ec9688fd4fe57711e6468dc98ffaba7289dfbecb59a3e0aac3cc51e not found: ID does not exist" containerID="433439900ec9688fd4fe57711e6468dc98ffaba7289dfbecb59a3e0aac3cc51e" Dec 01 09:47:00 crc kubenswrapper[4933]: I1201 09:47:00.062360 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"433439900ec9688fd4fe57711e6468dc98ffaba7289dfbecb59a3e0aac3cc51e"} err="failed to get container status \"433439900ec9688fd4fe57711e6468dc98ffaba7289dfbecb59a3e0aac3cc51e\": rpc error: code = NotFound desc = could not find container \"433439900ec9688fd4fe57711e6468dc98ffaba7289dfbecb59a3e0aac3cc51e\": container with ID starting with 433439900ec9688fd4fe57711e6468dc98ffaba7289dfbecb59a3e0aac3cc51e not found: ID does not exist" Dec 01 09:47:00 crc kubenswrapper[4933]: I1201 09:47:00.062431 4933 scope.go:117] "RemoveContainer" containerID="c1165f885a2882e40037b1aae5735792daea0dae0c203fb63065fd9b9ba273fa" Dec 01 09:47:00 crc kubenswrapper[4933]: E1201 09:47:00.062840 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1165f885a2882e40037b1aae5735792daea0dae0c203fb63065fd9b9ba273fa\": container with ID starting with c1165f885a2882e40037b1aae5735792daea0dae0c203fb63065fd9b9ba273fa not found: ID does not exist" containerID="c1165f885a2882e40037b1aae5735792daea0dae0c203fb63065fd9b9ba273fa" Dec 01 09:47:00 crc kubenswrapper[4933]: I1201 09:47:00.062899 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1165f885a2882e40037b1aae5735792daea0dae0c203fb63065fd9b9ba273fa"} err="failed to get container status \"c1165f885a2882e40037b1aae5735792daea0dae0c203fb63065fd9b9ba273fa\": rpc error: code = NotFound desc = could not find container \"c1165f885a2882e40037b1aae5735792daea0dae0c203fb63065fd9b9ba273fa\": container with ID starting with c1165f885a2882e40037b1aae5735792daea0dae0c203fb63065fd9b9ba273fa not found: ID does not exist" Dec 01 09:47:00 crc kubenswrapper[4933]: I1201 09:47:00.062934 4933 scope.go:117] "RemoveContainer" containerID="6c9720e77973fa29d839a56299fac90b8d6b0dfabae8280fdbdced69f81c2c50" Dec 01 09:47:00 crc kubenswrapper[4933]: E1201 09:47:00.063445 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c9720e77973fa29d839a56299fac90b8d6b0dfabae8280fdbdced69f81c2c50\": container with ID starting with 6c9720e77973fa29d839a56299fac90b8d6b0dfabae8280fdbdced69f81c2c50 not found: ID does not exist" containerID="6c9720e77973fa29d839a56299fac90b8d6b0dfabae8280fdbdced69f81c2c50" Dec 01 09:47:00 crc kubenswrapper[4933]: I1201 09:47:00.063475 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c9720e77973fa29d839a56299fac90b8d6b0dfabae8280fdbdced69f81c2c50"} err="failed to get container status \"6c9720e77973fa29d839a56299fac90b8d6b0dfabae8280fdbdced69f81c2c50\": rpc error: code = NotFound desc = could not find container \"6c9720e77973fa29d839a56299fac90b8d6b0dfabae8280fdbdced69f81c2c50\": container with ID starting with 
6c9720e77973fa29d839a56299fac90b8d6b0dfabae8280fdbdced69f81c2c50 not found: ID does not exist" Dec 01 09:47:00 crc kubenswrapper[4933]: I1201 09:47:00.067819 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zmkgq"] Dec 01 09:47:00 crc kubenswrapper[4933]: I1201 09:47:00.073901 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zmkgq"] Dec 01 09:47:01 crc kubenswrapper[4933]: I1201 09:47:01.008553 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhm29" event={"ID":"64493c13-c6fd-47f2-a910-4138fd29c0b2","Type":"ContainerStarted","Data":"2d65e4c7da3f8e02803dfc6ba2f97327f18006ed4e284d130f67a7e75c730a66"} Dec 01 09:47:01 crc kubenswrapper[4933]: I1201 09:47:01.677607 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42e93bba-0d52-44a0-96d0-dad3eb354b20" path="/var/lib/kubelet/pods/42e93bba-0d52-44a0-96d0-dad3eb354b20/volumes" Dec 01 09:47:02 crc kubenswrapper[4933]: I1201 09:47:02.026583 4933 generic.go:334] "Generic (PLEG): container finished" podID="64493c13-c6fd-47f2-a910-4138fd29c0b2" containerID="2d65e4c7da3f8e02803dfc6ba2f97327f18006ed4e284d130f67a7e75c730a66" exitCode=0 Dec 01 09:47:02 crc kubenswrapper[4933]: I1201 09:47:02.026664 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhm29" event={"ID":"64493c13-c6fd-47f2-a910-4138fd29c0b2","Type":"ContainerDied","Data":"2d65e4c7da3f8e02803dfc6ba2f97327f18006ed4e284d130f67a7e75c730a66"} Dec 01 09:47:03 crc kubenswrapper[4933]: I1201 09:47:03.036107 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhm29" event={"ID":"64493c13-c6fd-47f2-a910-4138fd29c0b2","Type":"ContainerStarted","Data":"edf437ca09ff31bd8bfa5a6e0fa22c74835cde5c3fcd92ca5a933647382a6ea9"} Dec 01 09:47:03 crc kubenswrapper[4933]: I1201 09:47:03.059962 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vhm29" podStartSLOduration=5.342133729 podStartE2EDuration="8.059944484s" podCreationTimestamp="2025-12-01 09:46:55 +0000 UTC" firstStartedPulling="2025-12-01 09:46:59.996192387 +0000 UTC m=+910.637916002" lastFinishedPulling="2025-12-01 09:47:02.714003142 +0000 UTC m=+913.355726757" observedRunningTime="2025-12-01 09:47:03.056216063 +0000 UTC m=+913.697939678" watchObservedRunningTime="2025-12-01 09:47:03.059944484 +0000 UTC m=+913.701668099" Dec 01 09:47:05 crc kubenswrapper[4933]: I1201 09:47:05.543368 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vhm29" Dec 01 09:47:05 crc kubenswrapper[4933]: I1201 09:47:05.543803 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vhm29" Dec 01 09:47:05 crc kubenswrapper[4933]: I1201 09:47:05.584990 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vhm29" Dec 01 09:47:12 crc kubenswrapper[4933]: I1201 09:47:12.296186 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-66ff97f68c-jqgr4" Dec 01 09:47:15 crc kubenswrapper[4933]: I1201 09:47:15.612765 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vhm29" Dec 01 09:47:18 crc 
kubenswrapper[4933]: I1201 09:47:18.585895 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vhm29"] Dec 01 09:47:18 crc kubenswrapper[4933]: I1201 09:47:18.586258 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vhm29" podUID="64493c13-c6fd-47f2-a910-4138fd29c0b2" containerName="registry-server" containerID="cri-o://edf437ca09ff31bd8bfa5a6e0fa22c74835cde5c3fcd92ca5a933647382a6ea9" gracePeriod=2 Dec 01 09:47:21 crc kubenswrapper[4933]: I1201 09:47:21.184026 4933 generic.go:334] "Generic (PLEG): container finished" podID="64493c13-c6fd-47f2-a910-4138fd29c0b2" containerID="edf437ca09ff31bd8bfa5a6e0fa22c74835cde5c3fcd92ca5a933647382a6ea9" exitCode=0 Dec 01 09:47:21 crc kubenswrapper[4933]: I1201 09:47:21.184106 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhm29" event={"ID":"64493c13-c6fd-47f2-a910-4138fd29c0b2","Type":"ContainerDied","Data":"edf437ca09ff31bd8bfa5a6e0fa22c74835cde5c3fcd92ca5a933647382a6ea9"} Dec 01 09:47:21 crc kubenswrapper[4933]: I1201 09:47:21.723503 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vhm29" Dec 01 09:47:21 crc kubenswrapper[4933]: I1201 09:47:21.839074 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74hfw\" (UniqueName: \"kubernetes.io/projected/64493c13-c6fd-47f2-a910-4138fd29c0b2-kube-api-access-74hfw\") pod \"64493c13-c6fd-47f2-a910-4138fd29c0b2\" (UID: \"64493c13-c6fd-47f2-a910-4138fd29c0b2\") " Dec 01 09:47:21 crc kubenswrapper[4933]: I1201 09:47:21.839219 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64493c13-c6fd-47f2-a910-4138fd29c0b2-catalog-content\") pod \"64493c13-c6fd-47f2-a910-4138fd29c0b2\" (UID: \"64493c13-c6fd-47f2-a910-4138fd29c0b2\") " Dec 01 09:47:21 crc kubenswrapper[4933]: I1201 09:47:21.839472 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64493c13-c6fd-47f2-a910-4138fd29c0b2-utilities\") pod \"64493c13-c6fd-47f2-a910-4138fd29c0b2\" (UID: \"64493c13-c6fd-47f2-a910-4138fd29c0b2\") " Dec 01 09:47:21 crc kubenswrapper[4933]: I1201 09:47:21.840972 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64493c13-c6fd-47f2-a910-4138fd29c0b2-utilities" (OuterVolumeSpecName: "utilities") pod "64493c13-c6fd-47f2-a910-4138fd29c0b2" (UID: "64493c13-c6fd-47f2-a910-4138fd29c0b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:47:21 crc kubenswrapper[4933]: I1201 09:47:21.847943 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64493c13-c6fd-47f2-a910-4138fd29c0b2-kube-api-access-74hfw" (OuterVolumeSpecName: "kube-api-access-74hfw") pod "64493c13-c6fd-47f2-a910-4138fd29c0b2" (UID: "64493c13-c6fd-47f2-a910-4138fd29c0b2"). InnerVolumeSpecName "kube-api-access-74hfw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:47:21 crc kubenswrapper[4933]: I1201 09:47:21.898140 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64493c13-c6fd-47f2-a910-4138fd29c0b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64493c13-c6fd-47f2-a910-4138fd29c0b2" (UID: "64493c13-c6fd-47f2-a910-4138fd29c0b2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:47:21 crc kubenswrapper[4933]: I1201 09:47:21.942236 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64493c13-c6fd-47f2-a910-4138fd29c0b2-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 09:47:21 crc kubenswrapper[4933]: I1201 09:47:21.942803 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74hfw\" (UniqueName: \"kubernetes.io/projected/64493c13-c6fd-47f2-a910-4138fd29c0b2-kube-api-access-74hfw\") on node \"crc\" DevicePath \"\"" Dec 01 09:47:21 crc kubenswrapper[4933]: I1201 09:47:21.942828 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64493c13-c6fd-47f2-a910-4138fd29c0b2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 09:47:22 crc kubenswrapper[4933]: I1201 09:47:22.202961 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhm29" event={"ID":"64493c13-c6fd-47f2-a910-4138fd29c0b2","Type":"ContainerDied","Data":"33daa2d3d22049c2730941983b1780119729e960a595c67dce382cd679196005"} Dec 01 09:47:22 crc kubenswrapper[4933]: I1201 09:47:22.203032 4933 scope.go:117] "RemoveContainer" containerID="edf437ca09ff31bd8bfa5a6e0fa22c74835cde5c3fcd92ca5a933647382a6ea9" Dec 01 09:47:22 crc kubenswrapper[4933]: I1201 09:47:22.203242 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vhm29" Dec 01 09:47:22 crc kubenswrapper[4933]: I1201 09:47:22.230323 4933 scope.go:117] "RemoveContainer" containerID="2d65e4c7da3f8e02803dfc6ba2f97327f18006ed4e284d130f67a7e75c730a66" Dec 01 09:47:22 crc kubenswrapper[4933]: I1201 09:47:22.238811 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vhm29"] Dec 01 09:47:22 crc kubenswrapper[4933]: I1201 09:47:22.243231 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vhm29"] Dec 01 09:47:22 crc kubenswrapper[4933]: I1201 09:47:22.254211 4933 scope.go:117] "RemoveContainer" containerID="2e15959fb377d3e2dfe053ce9902f0d37c4dc65fb658d267bfba68e15a451850" Dec 01 09:47:23 crc kubenswrapper[4933]: I1201 09:47:23.685699 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64493c13-c6fd-47f2-a910-4138fd29c0b2" path="/var/lib/kubelet/pods/64493c13-c6fd-47f2-a910-4138fd29c0b2/volumes" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.474525 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-fntw7"] Dec 01 09:47:41 crc kubenswrapper[4933]: E1201 09:47:41.475580 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42e93bba-0d52-44a0-96d0-dad3eb354b20" containerName="registry-server" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.475602 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="42e93bba-0d52-44a0-96d0-dad3eb354b20" containerName="registry-server" Dec 01 09:47:41 crc kubenswrapper[4933]: E1201 09:47:41.475624 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42e93bba-0d52-44a0-96d0-dad3eb354b20" containerName="extract-utilities" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.475632 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="42e93bba-0d52-44a0-96d0-dad3eb354b20" containerName="extract-utilities" Dec 01 09:47:41 crc kubenswrapper[4933]: E1201 09:47:41.475644 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64493c13-c6fd-47f2-a910-4138fd29c0b2" containerName="extract-content" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.475652 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="64493c13-c6fd-47f2-a910-4138fd29c0b2" containerName="extract-content" Dec 01 09:47:41 crc kubenswrapper[4933]: E1201 09:47:41.475663 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64493c13-c6fd-47f2-a910-4138fd29c0b2" containerName="registry-server" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.475670 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="64493c13-c6fd-47f2-a910-4138fd29c0b2" containerName="registry-server" Dec 01 09:47:41 crc kubenswrapper[4933]: E1201 09:47:41.475688 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64493c13-c6fd-47f2-a910-4138fd29c0b2" containerName="extract-utilities" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.475695 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="64493c13-c6fd-47f2-a910-4138fd29c0b2" containerName="extract-utilities" Dec 01 09:47:41 crc kubenswrapper[4933]: E1201 09:47:41.475707 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42e93bba-0d52-44a0-96d0-dad3eb354b20" containerName="extract-content" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.475717 4933 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="42e93bba-0d52-44a0-96d0-dad3eb354b20" containerName="extract-content" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.475888 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="42e93bba-0d52-44a0-96d0-dad3eb354b20" containerName="registry-server" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.475907 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="64493c13-c6fd-47f2-a910-4138fd29c0b2" containerName="registry-server" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.476870 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-fntw7" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.479787 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-9fvkr"] Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.480575 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-wrdwp" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.481608 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-9fvkr" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.483987 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-fbzzn" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.490895 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-cpthv"] Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.492392 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-cpthv" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.497786 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-gsdhm" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.500109 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-fntw7"] Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.505431 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-9fvkr"] Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.522964 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-4bfbh"] Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.524279 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-4bfbh" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.527760 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-6ptf2" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.542623 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-cpthv"] Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.547758 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-4bfbh"] Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.565650 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6q6m6"] Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.567229 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6q6m6" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.569370 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-mhjh2" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.584156 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gd76x"] Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.586575 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gd76x" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.594153 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-fvq8d" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.612738 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gd76x"] Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.615034 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rww7s\" (UniqueName: \"kubernetes.io/projected/e1f14086-5509-48fe-a88c-c2717009ef93-kube-api-access-rww7s\") pod \"barbican-operator-controller-manager-7d9dfd778-9fvkr\" (UID: \"e1f14086-5509-48fe-a88c-c2717009ef93\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-9fvkr" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.615135 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbshw\" (UniqueName: \"kubernetes.io/projected/19b19877-3b1b-40f9-9501-329bceb4756a-kube-api-access-lbshw\") pod \"cinder-operator-controller-manager-859b6ccc6-fntw7\" (UID: \"19b19877-3b1b-40f9-9501-329bceb4756a\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-fntw7" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.615165 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb9xn\" (UniqueName: \"kubernetes.io/projected/9c52b072-b528-4fee-88b8-c878150882b1-kube-api-access-cb9xn\") pod \"designate-operator-controller-manager-78b4bc895b-cpthv\" (UID: \"9c52b072-b528-4fee-88b8-c878150882b1\") " 
pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-cpthv" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.639673 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6q6m6"] Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.664334 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-hcgq6"] Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.665863 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-hcgq6" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.670185 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7bx8x" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.670566 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.703034 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-hcgq6"] Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.703114 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-7rjkh"] Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.716875 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmrz7\" (UniqueName: \"kubernetes.io/projected/96d92174-459d-4657-bbbb-a56271877411-kube-api-access-jmrz7\") pod \"horizon-operator-controller-manager-68c6d99b8f-gd76x\" (UID: \"96d92174-459d-4657-bbbb-a56271877411\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gd76x" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.716981 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbshw\" (UniqueName: \"kubernetes.io/projected/19b19877-3b1b-40f9-9501-329bceb4756a-kube-api-access-lbshw\") pod \"cinder-operator-controller-manager-859b6ccc6-fntw7\" (UID: \"19b19877-3b1b-40f9-9501-329bceb4756a\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-fntw7" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.717147 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb9xn\" (UniqueName: \"kubernetes.io/projected/9c52b072-b528-4fee-88b8-c878150882b1-kube-api-access-cb9xn\") pod \"designate-operator-controller-manager-78b4bc895b-cpthv\" (UID: \"9c52b072-b528-4fee-88b8-c878150882b1\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-cpthv" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.717254 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rww7s\" (UniqueName: \"kubernetes.io/projected/e1f14086-5509-48fe-a88c-c2717009ef93-kube-api-access-rww7s\") pod \"barbican-operator-controller-manager-7d9dfd778-9fvkr\" (UID: \"e1f14086-5509-48fe-a88c-c2717009ef93\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-9fvkr" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.717341 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkr7k\" (UniqueName: 
\"kubernetes.io/projected/9564306d-6348-40b4-9e3e-42fcd5778383-kube-api-access-hkr7k\") pod \"heat-operator-controller-manager-5f64f6f8bb-6q6m6\" (UID: \"9564306d-6348-40b4-9e3e-42fcd5778383\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6q6m6" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.717457 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp5j6\" (UniqueName: \"kubernetes.io/projected/e88cb01f-84f3-4cdc-9d5d-f283f883868e-kube-api-access-dp5j6\") pod \"glance-operator-controller-manager-668d9c48b9-4bfbh\" (UID: \"e88cb01f-84f3-4cdc-9d5d-f283f883868e\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-4bfbh" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.733087 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-7rjkh" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.746081 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-qslqw" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.759409 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb9xn\" (UniqueName: \"kubernetes.io/projected/9c52b072-b528-4fee-88b8-c878150882b1-kube-api-access-cb9xn\") pod \"designate-operator-controller-manager-78b4bc895b-cpthv\" (UID: \"9c52b072-b528-4fee-88b8-c878150882b1\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-cpthv" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.772230 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rww7s\" (UniqueName: \"kubernetes.io/projected/e1f14086-5509-48fe-a88c-c2717009ef93-kube-api-access-rww7s\") pod \"barbican-operator-controller-manager-7d9dfd778-9fvkr\" (UID: \"e1f14086-5509-48fe-a88c-c2717009ef93\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-9fvkr" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.803928 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbshw\" (UniqueName: \"kubernetes.io/projected/19b19877-3b1b-40f9-9501-329bceb4756a-kube-api-access-lbshw\") pod \"cinder-operator-controller-manager-859b6ccc6-fntw7\" (UID: \"19b19877-3b1b-40f9-9501-329bceb4756a\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-fntw7" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.808620 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-7rjkh"] Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.815613 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-9fvkr" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.823130 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-fntw7" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.827808 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-94gt2"] Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.832817 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-94gt2" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.835961 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-7mcx4" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.840427 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmrz7\" (UniqueName: \"kubernetes.io/projected/96d92174-459d-4657-bbbb-a56271877411-kube-api-access-jmrz7\") pod \"horizon-operator-controller-manager-68c6d99b8f-gd76x\" (UID: \"96d92174-459d-4657-bbbb-a56271877411\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gd76x" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.840563 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dd39823-94d3-4a96-90e4-ada73223c4b0-cert\") pod \"infra-operator-controller-manager-57548d458d-hcgq6\" (UID: \"7dd39823-94d3-4a96-90e4-ada73223c4b0\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-hcgq6" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.840621 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfz9h\" (UniqueName: \"kubernetes.io/projected/7dd39823-94d3-4a96-90e4-ada73223c4b0-kube-api-access-xfz9h\") pod \"infra-operator-controller-manager-57548d458d-hcgq6\" (UID: \"7dd39823-94d3-4a96-90e4-ada73223c4b0\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-hcgq6" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.840752 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkr7k\" (UniqueName: \"kubernetes.io/projected/9564306d-6348-40b4-9e3e-42fcd5778383-kube-api-access-hkr7k\") pod \"heat-operator-controller-manager-5f64f6f8bb-6q6m6\" (UID: \"9564306d-6348-40b4-9e3e-42fcd5778383\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6q6m6" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.840812 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp5j6\" (UniqueName: \"kubernetes.io/projected/e88cb01f-84f3-4cdc-9d5d-f283f883868e-kube-api-access-dp5j6\") pod \"glance-operator-controller-manager-668d9c48b9-4bfbh\" (UID: \"e88cb01f-84f3-4cdc-9d5d-f283f883868e\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-4bfbh" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.840838 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-cpthv" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.865808 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-mlmgw"] Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.870976 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-mlmgw" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.874079 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmrz7\" (UniqueName: \"kubernetes.io/projected/96d92174-459d-4657-bbbb-a56271877411-kube-api-access-jmrz7\") pod \"horizon-operator-controller-manager-68c6d99b8f-gd76x\" (UID: \"96d92174-459d-4657-bbbb-a56271877411\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gd76x" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.874230 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-jnjlj" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.875483 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkr7k\" (UniqueName: \"kubernetes.io/projected/9564306d-6348-40b4-9e3e-42fcd5778383-kube-api-access-hkr7k\") pod \"heat-operator-controller-manager-5f64f6f8bb-6q6m6\" (UID: \"9564306d-6348-40b4-9e3e-42fcd5778383\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6q6m6" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.888176 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp5j6\" (UniqueName: \"kubernetes.io/projected/e88cb01f-84f3-4cdc-9d5d-f283f883868e-kube-api-access-dp5j6\") pod \"glance-operator-controller-manager-668d9c48b9-4bfbh\" (UID: \"e88cb01f-84f3-4cdc-9d5d-f283f883868e\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-4bfbh" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.891006 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6q6m6" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.894389 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-94gt2"] Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.906071 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gd76x" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.914108 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-mlmgw"] Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.922292 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-n5pnz"] Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.923835 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-n5pnz" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.934361 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-ltjkh" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.942031 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dd39823-94d3-4a96-90e4-ada73223c4b0-cert\") pod \"infra-operator-controller-manager-57548d458d-hcgq6\" (UID: \"7dd39823-94d3-4a96-90e4-ada73223c4b0\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-hcgq6" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.942088 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2s62\" (UniqueName: \"kubernetes.io/projected/eefc3c9c-eade-4b6e-8902-6936d481cb1b-kube-api-access-v2s62\") pod \"ironic-operator-controller-manager-6c548fd776-7rjkh\" (UID: \"eefc3c9c-eade-4b6e-8902-6936d481cb1b\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-7rjkh" Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.942110 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfz9h\" (UniqueName: \"kubernetes.io/projected/7dd39823-94d3-4a96-90e4-ada73223c4b0-kube-api-access-xfz9h\") pod \"infra-operator-controller-manager-57548d458d-hcgq6\" (UID: \"7dd39823-94d3-4a96-90e4-ada73223c4b0\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-hcgq6" Dec 01 09:47:41 crc kubenswrapper[4933]: E1201 09:47:41.944587 4933 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 01 09:47:41 crc kubenswrapper[4933]: E1201 09:47:41.944715 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dd39823-94d3-4a96-90e4-ada73223c4b0-cert podName:7dd39823-94d3-4a96-90e4-ada73223c4b0 nodeName:}" failed. No retries permitted until 2025-12-01 09:47:42.44468181 +0000 UTC m=+953.086405425 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7dd39823-94d3-4a96-90e4-ada73223c4b0-cert") pod "infra-operator-controller-manager-57548d458d-hcgq6" (UID: "7dd39823-94d3-4a96-90e4-ada73223c4b0") : secret "infra-operator-webhook-server-cert" not found Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.963231 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-n5pnz"] Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.971287 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8rdcd"] Dec 01 09:47:41 crc kubenswrapper[4933]: I1201 09:47:41.973382 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8rdcd" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.002518 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-dk6dw" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.019396 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8rdcd"] Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.036835 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfz9h\" (UniqueName: \"kubernetes.io/projected/7dd39823-94d3-4a96-90e4-ada73223c4b0-kube-api-access-xfz9h\") pod \"infra-operator-controller-manager-57548d458d-hcgq6\" (UID: \"7dd39823-94d3-4a96-90e4-ada73223c4b0\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-hcgq6" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.043731 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82kvl\" (UniqueName: \"kubernetes.io/projected/b303701b-30bc-4779-b1fa-f574bd6cce65-kube-api-access-82kvl\") pod \"keystone-operator-controller-manager-546d4bdf48-94gt2\" (UID: \"b303701b-30bc-4779-b1fa-f574bd6cce65\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-94gt2" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.043781 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s62\" (UniqueName: \"kubernetes.io/projected/eefc3c9c-eade-4b6e-8902-6936d481cb1b-kube-api-access-v2s62\") pod \"ironic-operator-controller-manager-6c548fd776-7rjkh\" (UID: \"eefc3c9c-eade-4b6e-8902-6936d481cb1b\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-7rjkh" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.043838 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hvk6\" (UniqueName: \"kubernetes.io/projected/e32cc225-71ff-4edf-8e11-ac7abf7afe27-kube-api-access-8hvk6\") pod \"mariadb-operator-controller-manager-56bbcc9d85-n5pnz\" (UID: \"e32cc225-71ff-4edf-8e11-ac7abf7afe27\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-n5pnz" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.043865 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6fp7\" (UniqueName: \"kubernetes.io/projected/b925c282-ee4d-4b1f-8f18-d3baa2f8faef-kube-api-access-p6fp7\") pod \"manila-operator-controller-manager-6546668bfd-mlmgw\" (UID: \"b925c282-ee4d-4b1f-8f18-d3baa2f8faef\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-mlmgw" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.059959 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-w8tzl"] Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.061472 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-w8tzl" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.072955 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-w8tzl"] Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.073391 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-t978w" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.081761 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2s62\" (UniqueName: \"kubernetes.io/projected/eefc3c9c-eade-4b6e-8902-6936d481cb1b-kube-api-access-v2s62\") pod \"ironic-operator-controller-manager-6c548fd776-7rjkh\" (UID: \"eefc3c9c-eade-4b6e-8902-6936d481cb1b\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-7rjkh" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.108116 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-lxkkf"] Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.110230 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lxkkf" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.136944 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-6mr76" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.141998 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-lxkkf"] Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.143098 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-4bfbh" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.146206 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hvk6\" (UniqueName: \"kubernetes.io/projected/e32cc225-71ff-4edf-8e11-ac7abf7afe27-kube-api-access-8hvk6\") pod \"mariadb-operator-controller-manager-56bbcc9d85-n5pnz\" (UID: \"e32cc225-71ff-4edf-8e11-ac7abf7afe27\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-n5pnz" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.146262 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6fp7\" (UniqueName: \"kubernetes.io/projected/b925c282-ee4d-4b1f-8f18-d3baa2f8faef-kube-api-access-p6fp7\") pod \"manila-operator-controller-manager-6546668bfd-mlmgw\" (UID: \"b925c282-ee4d-4b1f-8f18-d3baa2f8faef\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-mlmgw" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.146323 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6xgp\" (UniqueName: \"kubernetes.io/projected/c10a734c-970c-42dd-aa15-a27dd68941e1-kube-api-access-k6xgp\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-8rdcd\" (UID: \"c10a734c-970c-42dd-aa15-a27dd68941e1\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8rdcd" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.146426 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82kvl\" (UniqueName: \"kubernetes.io/projected/b303701b-30bc-4779-b1fa-f574bd6cce65-kube-api-access-82kvl\") pod \"keystone-operator-controller-manager-546d4bdf48-94gt2\" (UID: \"b303701b-30bc-4779-b1fa-f574bd6cce65\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-94gt2" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.194672 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd45jrln"] Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.206053 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd45jrln" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.206420 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hvk6\" (UniqueName: \"kubernetes.io/projected/e32cc225-71ff-4edf-8e11-ac7abf7afe27-kube-api-access-8hvk6\") pod \"mariadb-operator-controller-manager-56bbcc9d85-n5pnz\" (UID: \"e32cc225-71ff-4edf-8e11-ac7abf7afe27\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-n5pnz" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.210210 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-r9vt5" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.216618 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.217433 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82kvl\" (UniqueName: \"kubernetes.io/projected/b303701b-30bc-4779-b1fa-f574bd6cce65-kube-api-access-82kvl\") pod \"keystone-operator-controller-manager-546d4bdf48-94gt2\" (UID: \"b303701b-30bc-4779-b1fa-f574bd6cce65\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-94gt2" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.223555 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-7c9rv"] Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.228054 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-7c9rv" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.237382 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-94gt2" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.248500 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-7c9rv"] Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.249111 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6fp7\" (UniqueName: \"kubernetes.io/projected/b925c282-ee4d-4b1f-8f18-d3baa2f8faef-kube-api-access-p6fp7\") pod \"manila-operator-controller-manager-6546668bfd-mlmgw\" (UID: \"b925c282-ee4d-4b1f-8f18-d3baa2f8faef\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-mlmgw" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.250114 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vkcx\" (UniqueName: \"kubernetes.io/projected/9a84bd2a-303d-492c-b507-61fa590290d1-kube-api-access-5vkcx\") pod \"nova-operator-controller-manager-697bc559fc-w8tzl\" (UID: \"9a84bd2a-303d-492c-b507-61fa590290d1\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-w8tzl" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.250188 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6xgp\" (UniqueName: \"kubernetes.io/projected/c10a734c-970c-42dd-aa15-a27dd68941e1-kube-api-access-k6xgp\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-8rdcd\" (UID: \"c10a734c-970c-42dd-aa15-a27dd68941e1\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8rdcd" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.250281 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgzmm\" (UniqueName: \"kubernetes.io/projected/a8f52d69-0961-4ac0-b41f-200400bfcf2b-kube-api-access-vgzmm\") pod \"octavia-operator-controller-manager-998648c74-lxkkf\" (UID: \"a8f52d69-0961-4ac0-b41f-200400bfcf2b\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-lxkkf" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.255118 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-fgs28" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.275518 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-mlmgw" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.285094 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6xgp\" (UniqueName: \"kubernetes.io/projected/c10a734c-970c-42dd-aa15-a27dd68941e1-kube-api-access-k6xgp\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-8rdcd\" (UID: \"c10a734c-970c-42dd-aa15-a27dd68941e1\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8rdcd" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.299405 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-w92f7"] Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.306153 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-w92f7" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.309029 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-n5pnz" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.309886 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-dwbzw" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.326514 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8rdcd" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.336374 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd45jrln"] Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.351550 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5q2s\" (UniqueName: \"kubernetes.io/projected/96699ea8-fc44-4dc2-a6f2-f2109d091097-kube-api-access-n5q2s\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd45jrln\" (UID: \"96699ea8-fc44-4dc2-a6f2-f2109d091097\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd45jrln" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.351631 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgzmm\" (UniqueName: \"kubernetes.io/projected/a8f52d69-0961-4ac0-b41f-200400bfcf2b-kube-api-access-vgzmm\") pod \"octavia-operator-controller-manager-998648c74-lxkkf\" (UID: \"a8f52d69-0961-4ac0-b41f-200400bfcf2b\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-lxkkf" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.351698 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vkcx\" (UniqueName: \"kubernetes.io/projected/9a84bd2a-303d-492c-b507-61fa590290d1-kube-api-access-5vkcx\") pod \"nova-operator-controller-manager-697bc559fc-w8tzl\" (UID: \"9a84bd2a-303d-492c-b507-61fa590290d1\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-w8tzl" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.351728 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96699ea8-fc44-4dc2-a6f2-f2109d091097-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd45jrln\" (UID: \"96699ea8-fc44-4dc2-a6f2-f2109d091097\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd45jrln" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.351752 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zgml\" (UniqueName: \"kubernetes.io/projected/2550654d-3a84-420e-bcaa-75a2f3c88dec-kube-api-access-4zgml\") pod \"ovn-operator-controller-manager-b6456fdb6-7c9rv\" (UID: \"2550654d-3a84-420e-bcaa-75a2f3c88dec\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-7c9rv" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.359493 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-frx4s"] Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 
09:47:42.361642 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-7rjkh" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.382477 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgzmm\" (UniqueName: \"kubernetes.io/projected/a8f52d69-0961-4ac0-b41f-200400bfcf2b-kube-api-access-vgzmm\") pod \"octavia-operator-controller-manager-998648c74-lxkkf\" (UID: \"a8f52d69-0961-4ac0-b41f-200400bfcf2b\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-lxkkf" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.382636 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-frx4s" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.392988 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-c9jwz" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.432389 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-w92f7"] Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.464197 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnpq4\" (UniqueName: \"kubernetes.io/projected/83542dc0-212d-4257-935c-aced954e9157-kube-api-access-nnpq4\") pod \"placement-operator-controller-manager-78f8948974-w92f7\" (UID: \"83542dc0-212d-4257-935c-aced954e9157\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-w92f7" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.464265 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96699ea8-fc44-4dc2-a6f2-f2109d091097-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd45jrln\" (UID: \"96699ea8-fc44-4dc2-a6f2-f2109d091097\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd45jrln" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.469970 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lxkkf" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.464300 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zgml\" (UniqueName: \"kubernetes.io/projected/2550654d-3a84-420e-bcaa-75a2f3c88dec-kube-api-access-4zgml\") pod \"ovn-operator-controller-manager-b6456fdb6-7c9rv\" (UID: \"2550654d-3a84-420e-bcaa-75a2f3c88dec\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-7c9rv" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.470650 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5q2s\" (UniqueName: \"kubernetes.io/projected/96699ea8-fc44-4dc2-a6f2-f2109d091097-kube-api-access-n5q2s\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd45jrln\" (UID: \"96699ea8-fc44-4dc2-a6f2-f2109d091097\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd45jrln" Dec 01 09:47:42 crc kubenswrapper[4933]: E1201 09:47:42.470656 4933 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 09:47:42 crc kubenswrapper[4933]: E1201 09:47:42.470832 4933 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 01 09:47:42 crc kubenswrapper[4933]: E1201 09:47:42.470833 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96699ea8-fc44-4dc2-a6f2-f2109d091097-cert podName:96699ea8-fc44-4dc2-a6f2-f2109d091097 nodeName:}" failed. No retries permitted until 2025-12-01 09:47:42.970788324 +0000 UTC m=+953.612511939 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/96699ea8-fc44-4dc2-a6f2-f2109d091097-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd45jrln" (UID: "96699ea8-fc44-4dc2-a6f2-f2109d091097") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 09:47:42 crc kubenswrapper[4933]: E1201 09:47:42.470901 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dd39823-94d3-4a96-90e4-ada73223c4b0-cert podName:7dd39823-94d3-4a96-90e4-ada73223c4b0 nodeName:}" failed. No retries permitted until 2025-12-01 09:47:43.470878146 +0000 UTC m=+954.112601761 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7dd39823-94d3-4a96-90e4-ada73223c4b0-cert") pod "infra-operator-controller-manager-57548d458d-hcgq6" (UID: "7dd39823-94d3-4a96-90e4-ada73223c4b0") : secret "infra-operator-webhook-server-cert" not found Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.470694 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dd39823-94d3-4a96-90e4-ada73223c4b0-cert\") pod \"infra-operator-controller-manager-57548d458d-hcgq6\" (UID: \"7dd39823-94d3-4a96-90e4-ada73223c4b0\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-hcgq6" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.471241 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvh6c\" (UniqueName: \"kubernetes.io/projected/0bd5ca15-126a-4c31-814b-b0390dc01b3c-kube-api-access-vvh6c\") pod \"swift-operator-controller-manager-5f8c65bbfc-frx4s\" (UID: \"0bd5ca15-126a-4c31-814b-b0390dc01b3c\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-frx4s" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.477700 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vkcx\" (UniqueName: \"kubernetes.io/projected/9a84bd2a-303d-492c-b507-61fa590290d1-kube-api-access-5vkcx\") pod \"nova-operator-controller-manager-697bc559fc-w8tzl\" (UID: \"9a84bd2a-303d-492c-b507-61fa590290d1\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-w8tzl" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.512083 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zgml\" (UniqueName: \"kubernetes.io/projected/2550654d-3a84-420e-bcaa-75a2f3c88dec-kube-api-access-4zgml\") pod \"ovn-operator-controller-manager-b6456fdb6-7c9rv\" (UID: \"2550654d-3a84-420e-bcaa-75a2f3c88dec\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-7c9rv" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.519515 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5q2s\" (UniqueName: \"kubernetes.io/projected/96699ea8-fc44-4dc2-a6f2-f2109d091097-kube-api-access-n5q2s\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd45jrln\" (UID: \"96699ea8-fc44-4dc2-a6f2-f2109d091097\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd45jrln" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.524204 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-b2gcw"] Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.535057 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-b2gcw" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.538797 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-pmspz" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.573937 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnpq4\" (UniqueName: \"kubernetes.io/projected/83542dc0-212d-4257-935c-aced954e9157-kube-api-access-nnpq4\") pod \"placement-operator-controller-manager-78f8948974-w92f7\" (UID: \"83542dc0-212d-4257-935c-aced954e9157\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-w92f7" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.574141 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv646\" (UniqueName: \"kubernetes.io/projected/6c192ef8-b774-486f-bb69-d73e8b89989e-kube-api-access-tv646\") pod \"telemetry-operator-controller-manager-76cc84c6bb-b2gcw\" (UID: \"6c192ef8-b774-486f-bb69-d73e8b89989e\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-b2gcw" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.574183 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvh6c\" (UniqueName: \"kubernetes.io/projected/0bd5ca15-126a-4c31-814b-b0390dc01b3c-kube-api-access-vvh6c\") pod \"swift-operator-controller-manager-5f8c65bbfc-frx4s\" (UID: \"0bd5ca15-126a-4c31-814b-b0390dc01b3c\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-frx4s" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.598736 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnpq4\" (UniqueName: \"kubernetes.io/projected/83542dc0-212d-4257-935c-aced954e9157-kube-api-access-nnpq4\") pod \"placement-operator-controller-manager-78f8948974-w92f7\" (UID: \"83542dc0-212d-4257-935c-aced954e9157\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-w92f7" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.602730 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvh6c\" (UniqueName: \"kubernetes.io/projected/0bd5ca15-126a-4c31-814b-b0390dc01b3c-kube-api-access-vvh6c\") pod \"swift-operator-controller-manager-5f8c65bbfc-frx4s\" (UID: \"0bd5ca15-126a-4c31-814b-b0390dc01b3c\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-frx4s" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.602978 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-frx4s"] Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.619857 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-w9jcs"] Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.775444 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-7c9rv" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.782870 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-w9jcs" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.786816 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-kf9l9" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.796585 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-b2gcw"] Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.799515 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-w92f7" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.803203 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-frx4s" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.803594 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-w8tzl" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.805393 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv646\" (UniqueName: \"kubernetes.io/projected/6c192ef8-b774-486f-bb69-d73e8b89989e-kube-api-access-tv646\") pod \"telemetry-operator-controller-manager-76cc84c6bb-b2gcw\" (UID: \"6c192ef8-b774-486f-bb69-d73e8b89989e\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-b2gcw" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.806291 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-w9jcs"] Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.826397 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-bmhhw"] Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.827661 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-bmhhw" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.832317 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-z9j7k" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.850578 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-bmhhw"] Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.880462 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd"] Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.881977 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.884758 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.885222 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-vmkkv" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.885371 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.890898 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd"] Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.908229 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w78jl\" (UniqueName: \"kubernetes.io/projected/c807406f-80fb-422b-a68f-e9706da2ac42-kube-api-access-w78jl\") pod \"test-operator-controller-manager-5854674fcc-w9jcs\" (UID: \"c807406f-80fb-422b-a68f-e9706da2ac42\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-w9jcs" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.908342 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-metrics-certs\") pod \"openstack-operator-controller-manager-547ff67f67-9fnsd\" (UID: \"c976f88e-97eb-4223-9475-252505656b6d\") " pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.908379 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-webhook-certs\") pod \"openstack-operator-controller-manager-547ff67f67-9fnsd\" (UID: \"c976f88e-97eb-4223-9475-252505656b6d\") " pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.910232 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rtlz\" (UniqueName: \"kubernetes.io/projected/48cfc1f9-dbcb-4ff7-88b7-aa7709648627-kube-api-access-8rtlz\") pod \"watcher-operator-controller-manager-769dc69bc-bmhhw\" (UID: \"48cfc1f9-dbcb-4ff7-88b7-aa7709648627\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-bmhhw" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.910276 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfwnm\" (UniqueName: \"kubernetes.io/projected/c976f88e-97eb-4223-9475-252505656b6d-kube-api-access-bfwnm\") pod \"openstack-operator-controller-manager-547ff67f67-9fnsd\" (UID: \"c976f88e-97eb-4223-9475-252505656b6d\") " pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.915984 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhlrp"] Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.917516 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhlrp" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.924053 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-dqpg9" Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.927661 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhlrp"] Dec 01 09:47:42 crc kubenswrapper[4933]: I1201 09:47:42.931636 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv646\" (UniqueName: \"kubernetes.io/projected/6c192ef8-b774-486f-bb69-d73e8b89989e-kube-api-access-tv646\") pod \"telemetry-operator-controller-manager-76cc84c6bb-b2gcw\" (UID: \"6c192ef8-b774-486f-bb69-d73e8b89989e\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-b2gcw" Dec 01 09:47:43 crc kubenswrapper[4933]: I1201 09:47:43.011994 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w78jl\" (UniqueName: \"kubernetes.io/projected/c807406f-80fb-422b-a68f-e9706da2ac42-kube-api-access-w78jl\") pod \"test-operator-controller-manager-5854674fcc-w9jcs\" (UID: \"c807406f-80fb-422b-a68f-e9706da2ac42\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-w9jcs" Dec 01 09:47:43 crc kubenswrapper[4933]: I1201 09:47:43.012053 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96699ea8-fc44-4dc2-a6f2-f2109d091097-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd45jrln\" (UID: \"96699ea8-fc44-4dc2-a6f2-f2109d091097\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd45jrln" Dec 01 09:47:43 crc kubenswrapper[4933]: I1201 09:47:43.012117 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-metrics-certs\") pod \"openstack-operator-controller-manager-547ff67f67-9fnsd\" (UID: \"c976f88e-97eb-4223-9475-252505656b6d\") " pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd" Dec 01 09:47:43 crc kubenswrapper[4933]: I1201 09:47:43.012143 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-webhook-certs\") pod \"openstack-operator-controller-manager-547ff67f67-9fnsd\" (UID: \"c976f88e-97eb-4223-9475-252505656b6d\") " pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd" Dec 01 09:47:43 crc kubenswrapper[4933]: E1201 09:47:43.012269 4933 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 09:47:43 crc kubenswrapper[4933]: E1201 09:47:43.012355 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96699ea8-fc44-4dc2-a6f2-f2109d091097-cert podName:96699ea8-fc44-4dc2-a6f2-f2109d091097 nodeName:}" failed. No retries permitted until 2025-12-01 09:47:44.012335346 +0000 UTC m=+954.654058961 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/96699ea8-fc44-4dc2-a6f2-f2109d091097-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd45jrln" (UID: "96699ea8-fc44-4dc2-a6f2-f2109d091097") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 09:47:43 crc kubenswrapper[4933]: E1201 09:47:43.012586 4933 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 01 09:47:43 crc kubenswrapper[4933]: E1201 09:47:43.012653 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-metrics-certs podName:c976f88e-97eb-4223-9475-252505656b6d nodeName:}" failed. No retries permitted until 2025-12-01 09:47:43.512632233 +0000 UTC m=+954.154355928 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-metrics-certs") pod "openstack-operator-controller-manager-547ff67f67-9fnsd" (UID: "c976f88e-97eb-4223-9475-252505656b6d") : secret "metrics-server-cert" not found Dec 01 09:47:43 crc kubenswrapper[4933]: I1201 09:47:43.012747 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mltrd\" (UniqueName: \"kubernetes.io/projected/3aa898e5-9bf0-4baf-9c71-261229f0baf0-kube-api-access-mltrd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-dhlrp\" (UID: \"3aa898e5-9bf0-4baf-9c71-261229f0baf0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhlrp" Dec 01 09:47:43 crc kubenswrapper[4933]: E1201 09:47:43.012800 4933 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 01 09:47:43 crc kubenswrapper[4933]: I1201 09:47:43.012821 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rtlz\" (UniqueName: \"kubernetes.io/projected/48cfc1f9-dbcb-4ff7-88b7-aa7709648627-kube-api-access-8rtlz\") pod \"watcher-operator-controller-manager-769dc69bc-bmhhw\" (UID: \"48cfc1f9-dbcb-4ff7-88b7-aa7709648627\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-bmhhw" Dec 01 09:47:43 crc kubenswrapper[4933]: E1201 09:47:43.012839 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-webhook-certs podName:c976f88e-97eb-4223-9475-252505656b6d nodeName:}" failed. No retries permitted until 2025-12-01 09:47:43.512826928 +0000 UTC m=+954.154550643 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-webhook-certs") pod "openstack-operator-controller-manager-547ff67f67-9fnsd" (UID: "c976f88e-97eb-4223-9475-252505656b6d") : secret "webhook-server-cert" not found Dec 01 09:47:43 crc kubenswrapper[4933]: I1201 09:47:43.012867 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfwnm\" (UniqueName: \"kubernetes.io/projected/c976f88e-97eb-4223-9475-252505656b6d-kube-api-access-bfwnm\") pod \"openstack-operator-controller-manager-547ff67f67-9fnsd\" (UID: \"c976f88e-97eb-4223-9475-252505656b6d\") " pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd" Dec 01 09:47:43 crc kubenswrapper[4933]: I1201 09:47:43.030482 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w78jl\" (UniqueName: \"kubernetes.io/projected/c807406f-80fb-422b-a68f-e9706da2ac42-kube-api-access-w78jl\") pod \"test-operator-controller-manager-5854674fcc-w9jcs\" (UID: \"c807406f-80fb-422b-a68f-e9706da2ac42\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-w9jcs" Dec 01 09:47:43 crc kubenswrapper[4933]: I1201 09:47:43.034948 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfwnm\" (UniqueName: \"kubernetes.io/projected/c976f88e-97eb-4223-9475-252505656b6d-kube-api-access-bfwnm\") pod \"openstack-operator-controller-manager-547ff67f67-9fnsd\" (UID: \"c976f88e-97eb-4223-9475-252505656b6d\") " pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd" Dec 01 09:47:43 crc kubenswrapper[4933]: I1201 09:47:43.044219 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rtlz\" (UniqueName: \"kubernetes.io/projected/48cfc1f9-dbcb-4ff7-88b7-aa7709648627-kube-api-access-8rtlz\") pod \"watcher-operator-controller-manager-769dc69bc-bmhhw\" (UID: \"48cfc1f9-dbcb-4ff7-88b7-aa7709648627\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-bmhhw" Dec 01 09:47:43 crc kubenswrapper[4933]: I1201 09:47:43.052858 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-w9jcs" Dec 01 09:47:43 crc kubenswrapper[4933]: I1201 09:47:43.114264 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mltrd\" (UniqueName: \"kubernetes.io/projected/3aa898e5-9bf0-4baf-9c71-261229f0baf0-kube-api-access-mltrd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-dhlrp\" (UID: \"3aa898e5-9bf0-4baf-9c71-261229f0baf0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhlrp" Dec 01 09:47:43 crc kubenswrapper[4933]: I1201 09:47:43.138228 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mltrd\" (UniqueName: \"kubernetes.io/projected/3aa898e5-9bf0-4baf-9c71-261229f0baf0-kube-api-access-mltrd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-dhlrp\" (UID: \"3aa898e5-9bf0-4baf-9c71-261229f0baf0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhlrp" Dec 01 09:47:43 crc kubenswrapper[4933]: I1201 09:47:43.159195 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-bmhhw" Dec 01 09:47:43 crc kubenswrapper[4933]: I1201 09:47:43.172600 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-b2gcw" Dec 01 09:47:43 crc kubenswrapper[4933]: I1201 09:47:43.331676 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhlrp" Dec 01 09:47:43 crc kubenswrapper[4933]: I1201 09:47:43.482284 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dd39823-94d3-4a96-90e4-ada73223c4b0-cert\") pod \"infra-operator-controller-manager-57548d458d-hcgq6\" (UID: \"7dd39823-94d3-4a96-90e4-ada73223c4b0\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-hcgq6" Dec 01 09:47:43 crc kubenswrapper[4933]: E1201 09:47:43.482484 4933 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 01 09:47:43 crc kubenswrapper[4933]: E1201 09:47:43.482545 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dd39823-94d3-4a96-90e4-ada73223c4b0-cert podName:7dd39823-94d3-4a96-90e4-ada73223c4b0 nodeName:}" failed. No retries permitted until 2025-12-01 09:47:45.482524781 +0000 UTC m=+956.124248396 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7dd39823-94d3-4a96-90e4-ada73223c4b0-cert") pod "infra-operator-controller-manager-57548d458d-hcgq6" (UID: "7dd39823-94d3-4a96-90e4-ada73223c4b0") : secret "infra-operator-webhook-server-cert" not found Dec 01 09:47:43 crc kubenswrapper[4933]: I1201 09:47:43.583575 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-metrics-certs\") pod \"openstack-operator-controller-manager-547ff67f67-9fnsd\" (UID: \"c976f88e-97eb-4223-9475-252505656b6d\") " pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd" Dec 01 09:47:43 crc kubenswrapper[4933]: I1201 09:47:43.583638 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-webhook-certs\") pod \"openstack-operator-controller-manager-547ff67f67-9fnsd\" (UID: \"c976f88e-97eb-4223-9475-252505656b6d\") " pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd" Dec 01 09:47:43 crc kubenswrapper[4933]: E1201 09:47:43.584485 4933 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 01 09:47:43 crc kubenswrapper[4933]: E1201 09:47:43.584510 4933 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 01 09:47:43 crc kubenswrapper[4933]: E1201 09:47:43.584711 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-webhook-certs podName:c976f88e-97eb-4223-9475-252505656b6d nodeName:}" failed. No retries permitted until 2025-12-01 09:47:44.584511448 +0000 UTC m=+955.226235063 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-webhook-certs") pod "openstack-operator-controller-manager-547ff67f67-9fnsd" (UID: "c976f88e-97eb-4223-9475-252505656b6d") : secret "webhook-server-cert" not found Dec 01 09:47:43 crc kubenswrapper[4933]: E1201 09:47:43.585059 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-metrics-certs podName:c976f88e-97eb-4223-9475-252505656b6d nodeName:}" failed. No retries permitted until 2025-12-01 09:47:44.585049641 +0000 UTC m=+955.226773256 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-metrics-certs") pod "openstack-operator-controller-manager-547ff67f67-9fnsd" (UID: "c976f88e-97eb-4223-9475-252505656b6d") : secret "metrics-server-cert" not found Dec 01 09:47:44 crc kubenswrapper[4933]: I1201 09:47:44.096435 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96699ea8-fc44-4dc2-a6f2-f2109d091097-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd45jrln\" (UID: \"96699ea8-fc44-4dc2-a6f2-f2109d091097\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd45jrln" Dec 01 09:47:44 crc kubenswrapper[4933]: E1201 09:47:44.097480 4933 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 09:47:44 crc kubenswrapper[4933]: E1201 09:47:44.097620 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96699ea8-fc44-4dc2-a6f2-f2109d091097-cert podName:96699ea8-fc44-4dc2-a6f2-f2109d091097 nodeName:}" failed. No retries permitted until 2025-12-01 09:47:46.097601813 +0000 UTC m=+956.739325428 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/96699ea8-fc44-4dc2-a6f2-f2109d091097-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd45jrln" (UID: "96699ea8-fc44-4dc2-a6f2-f2109d091097") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 09:47:44 crc kubenswrapper[4933]: I1201 09:47:44.617901 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-metrics-certs\") pod \"openstack-operator-controller-manager-547ff67f67-9fnsd\" (UID: \"c976f88e-97eb-4223-9475-252505656b6d\") " pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd" Dec 01 09:47:44 crc kubenswrapper[4933]: I1201 09:47:44.617969 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-webhook-certs\") pod \"openstack-operator-controller-manager-547ff67f67-9fnsd\" (UID: \"c976f88e-97eb-4223-9475-252505656b6d\") " pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd" Dec 01 09:47:44 crc kubenswrapper[4933]: E1201 09:47:44.618158 4933 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 01 09:47:44 crc kubenswrapper[4933]: E1201 09:47:44.618204 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-webhook-certs podName:c976f88e-97eb-4223-9475-252505656b6d nodeName:}" failed. No retries permitted until 2025-12-01 09:47:46.618190712 +0000 UTC m=+957.259914327 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-webhook-certs") pod "openstack-operator-controller-manager-547ff67f67-9fnsd" (UID: "c976f88e-97eb-4223-9475-252505656b6d") : secret "webhook-server-cert" not found Dec 01 09:47:44 crc kubenswrapper[4933]: E1201 09:47:44.618607 4933 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 01 09:47:44 crc kubenswrapper[4933]: E1201 09:47:44.618636 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-metrics-certs podName:c976f88e-97eb-4223-9475-252505656b6d nodeName:}" failed. No retries permitted until 2025-12-01 09:47:46.618626922 +0000 UTC m=+957.260350537 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-metrics-certs") pod "openstack-operator-controller-manager-547ff67f67-9fnsd" (UID: "c976f88e-97eb-4223-9475-252505656b6d") : secret "metrics-server-cert" not found Dec 01 09:47:45 crc kubenswrapper[4933]: I1201 09:47:45.532389 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dd39823-94d3-4a96-90e4-ada73223c4b0-cert\") pod \"infra-operator-controller-manager-57548d458d-hcgq6\" (UID: \"7dd39823-94d3-4a96-90e4-ada73223c4b0\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-hcgq6" Dec 01 09:47:45 crc kubenswrapper[4933]: E1201 09:47:45.534365 4933 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 01 09:47:45 crc kubenswrapper[4933]: E1201 09:47:45.534438 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dd39823-94d3-4a96-90e4-ada73223c4b0-cert podName:7dd39823-94d3-4a96-90e4-ada73223c4b0 nodeName:}" failed. No retries permitted until 2025-12-01 09:47:49.534416988 +0000 UTC m=+960.176140603 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7dd39823-94d3-4a96-90e4-ada73223c4b0-cert") pod "infra-operator-controller-manager-57548d458d-hcgq6" (UID: "7dd39823-94d3-4a96-90e4-ada73223c4b0") : secret "infra-operator-webhook-server-cert" not found Dec 01 09:47:46 crc kubenswrapper[4933]: I1201 09:47:46.082569 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gd76x"] Dec 01 09:47:46 crc kubenswrapper[4933]: I1201 09:47:46.158751 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-fntw7"] Dec 01 09:47:46 crc kubenswrapper[4933]: I1201 09:47:46.201157 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96699ea8-fc44-4dc2-a6f2-f2109d091097-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd45jrln\" (UID: \"96699ea8-fc44-4dc2-a6f2-f2109d091097\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd45jrln" Dec 01 09:47:46 crc kubenswrapper[4933]: E1201 09:47:46.201490 4933 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 09:47:46 crc kubenswrapper[4933]: E1201 09:47:46.201549 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96699ea8-fc44-4dc2-a6f2-f2109d091097-cert podName:96699ea8-fc44-4dc2-a6f2-f2109d091097 nodeName:}" failed. No retries permitted until 2025-12-01 09:47:50.201530066 +0000 UTC m=+960.843253681 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/96699ea8-fc44-4dc2-a6f2-f2109d091097-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd45jrln" (UID: "96699ea8-fc44-4dc2-a6f2-f2109d091097") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 09:47:46 crc kubenswrapper[4933]: I1201 09:47:46.274408 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-cpthv"] Dec 01 09:47:46 crc kubenswrapper[4933]: I1201 09:47:46.324946 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-9fvkr"] Dec 01 09:47:46 crc kubenswrapper[4933]: I1201 09:47:46.375702 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6q6m6"] Dec 01 09:47:46 crc kubenswrapper[4933]: I1201 09:47:46.401591 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-4bfbh"] Dec 01 09:47:46 crc kubenswrapper[4933]: I1201 09:47:46.431159 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-lxkkf"] Dec 01 09:47:46 crc kubenswrapper[4933]: I1201 09:47:46.585027 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-w9jcs"] Dec 01 09:47:46 crc kubenswrapper[4933]: I1201 09:47:46.625856 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-metrics-certs\") pod \"openstack-operator-controller-manager-547ff67f67-9fnsd\" (UID: \"c976f88e-97eb-4223-9475-252505656b6d\") " pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd" Dec 01 09:47:46 crc kubenswrapper[4933]: I1201 09:47:46.625914 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-webhook-certs\") pod \"openstack-operator-controller-manager-547ff67f67-9fnsd\" (UID: \"c976f88e-97eb-4223-9475-252505656b6d\") " pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd" Dec 01 09:47:46 crc kubenswrapper[4933]: E1201 09:47:46.626138 4933 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 01 09:47:46 crc kubenswrapper[4933]: E1201 09:47:46.626205 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-webhook-certs podName:c976f88e-97eb-4223-9475-252505656b6d nodeName:}" failed. No retries permitted until 2025-12-01 09:47:50.626182915 +0000 UTC m=+961.267906530 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-webhook-certs") pod "openstack-operator-controller-manager-547ff67f67-9fnsd" (UID: "c976f88e-97eb-4223-9475-252505656b6d") : secret "webhook-server-cert" not found Dec 01 09:47:46 crc kubenswrapper[4933]: E1201 09:47:46.626269 4933 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 01 09:47:46 crc kubenswrapper[4933]: E1201 09:47:46.626296 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-metrics-certs podName:c976f88e-97eb-4223-9475-252505656b6d nodeName:}" failed. No retries permitted until 2025-12-01 09:47:50.626287407 +0000 UTC m=+961.268011022 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-metrics-certs") pod "openstack-operator-controller-manager-547ff67f67-9fnsd" (UID: "c976f88e-97eb-4223-9475-252505656b6d") : secret "metrics-server-cert" not found Dec 01 09:47:46 crc kubenswrapper[4933]: W1201 09:47:46.671040 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2550654d_3a84_420e_bcaa_75a2f3c88dec.slice/crio-3067be18caa0d897be5e63be13b485c446e382b7a58d48d679caafc8b55243a7 WatchSource:0}: Error finding container 3067be18caa0d897be5e63be13b485c446e382b7a58d48d679caafc8b55243a7: Status 404 returned error can't find the container with id 3067be18caa0d897be5e63be13b485c446e382b7a58d48d679caafc8b55243a7 Dec 01 09:47:46 crc kubenswrapper[4933]: I1201 09:47:46.674499 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-7c9rv"] Dec 01 09:47:46 crc kubenswrapper[4933]: I1201 09:47:46.890863 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6q6m6" event={"ID":"9564306d-6348-40b4-9e3e-42fcd5778383","Type":"ContainerStarted","Data":"d3813ea5072073ec574cde2c81873f272fb3a108e7d49fa15ed55527e40a6bae"} Dec 01 09:47:46 crc kubenswrapper[4933]: I1201 09:47:46.892001 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-9fvkr" event={"ID":"e1f14086-5509-48fe-a88c-c2717009ef93","Type":"ContainerStarted","Data":"bd4fb97900b8af8c9609b0f532b0e761484cb21f2e151ba4238cf28229d1c6b6"} Dec 01 09:47:46 crc kubenswrapper[4933]: I1201 09:47:46.893390 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-cpthv" event={"ID":"9c52b072-b528-4fee-88b8-c878150882b1","Type":"ContainerStarted","Data":"a3fa51f978800cd929ca88e59874d9f343721c6a20a281446af85122b54330cb"} Dec 01 09:47:46 crc kubenswrapper[4933]: I1201 09:47:46.894774 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gd76x" event={"ID":"96d92174-459d-4657-bbbb-a56271877411","Type":"ContainerStarted","Data":"b620f449fa9274d8110788923d44ebc3f0b0ef971b610e50d9abea05c853836d"} Dec 01 09:47:46 crc kubenswrapper[4933]: I1201 09:47:46.896082 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-4bfbh" 
event={"ID":"e88cb01f-84f3-4cdc-9d5d-f283f883868e","Type":"ContainerStarted","Data":"201c810f5aa618dcc49ab59610ee23b0621d06e14393f3801a8294e66c7b24e4"} Dec 01 09:47:46 crc kubenswrapper[4933]: I1201 09:47:46.896849 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-7c9rv" event={"ID":"2550654d-3a84-420e-bcaa-75a2f3c88dec","Type":"ContainerStarted","Data":"3067be18caa0d897be5e63be13b485c446e382b7a58d48d679caafc8b55243a7"} Dec 01 09:47:46 crc kubenswrapper[4933]: I1201 09:47:46.897423 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lxkkf" event={"ID":"a8f52d69-0961-4ac0-b41f-200400bfcf2b","Type":"ContainerStarted","Data":"4779ec35d5b100b6f856ddc0bbfbb8d35c01f7e554476c79f37e127e88943045"} Dec 01 09:47:46 crc kubenswrapper[4933]: I1201 09:47:46.898634 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-w9jcs" event={"ID":"c807406f-80fb-422b-a68f-e9706da2ac42","Type":"ContainerStarted","Data":"e8fe08500beb37a7008a775cc79bf20573ee22dd88cca91aee041b00414385c8"} Dec 01 09:47:46 crc kubenswrapper[4933]: I1201 09:47:46.899931 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-fntw7" event={"ID":"19b19877-3b1b-40f9-9501-329bceb4756a","Type":"ContainerStarted","Data":"9c0c9f24c8f7a3961f312bbfabb9952986b8b0e6d37330709c9f776c9d2b5dde"} Dec 01 09:47:47 crc kubenswrapper[4933]: I1201 09:47:47.048154 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-94gt2"] Dec 01 09:47:47 crc kubenswrapper[4933]: W1201 09:47:47.056071 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb303701b_30bc_4779_b1fa_f574bd6cce65.slice/crio-7f85c58a0f3ca5b9935412536ecffb4d0c8aed8f5dea979474e4cd5d80d916b4 WatchSource:0}: Error finding container 7f85c58a0f3ca5b9935412536ecffb4d0c8aed8f5dea979474e4cd5d80d916b4: Status 404 returned error can't find the container with id 7f85c58a0f3ca5b9935412536ecffb4d0c8aed8f5dea979474e4cd5d80d916b4 Dec 01 09:47:47 crc kubenswrapper[4933]: I1201 09:47:47.069282 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-n5pnz"] Dec 01 09:47:47 crc kubenswrapper[4933]: I1201 09:47:47.076874 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-b2gcw"] Dec 01 09:47:47 crc kubenswrapper[4933]: I1201 09:47:47.086071 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-bmhhw"] Dec 01 09:47:47 crc kubenswrapper[4933]: W1201 09:47:47.088904 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a84bd2a_303d_492c_b507_61fa590290d1.slice/crio-edc379774f49ea8eede449d0774570b7f72613bbf53cf69ef14fe12680c566e9 WatchSource:0}: Error finding container edc379774f49ea8eede449d0774570b7f72613bbf53cf69ef14fe12680c566e9: Status 404 returned error can't find the container with id edc379774f49ea8eede449d0774570b7f72613bbf53cf69ef14fe12680c566e9 Dec 01 09:47:47 crc kubenswrapper[4933]: I1201 09:47:47.096074 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-w8tzl"] Dec 01 09:47:47 crc kubenswrapper[4933]: W1201 09:47:47.096647 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48cfc1f9_dbcb_4ff7_88b7_aa7709648627.slice/crio-36d61d914821a19e0b38997f870618e29f5ae01c4b7af0fdc332489a3a866ece WatchSource:0}: Error finding container 36d61d914821a19e0b38997f870618e29f5ae01c4b7af0fdc332489a3a866ece: Status 404 returned error can't find the container with id 36d61d914821a19e0b38997f870618e29f5ae01c4b7af0fdc332489a3a866ece Dec 01 09:47:47 crc kubenswrapper[4933]: I1201 09:47:47.100367 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8rdcd"] Dec 01 09:47:47 crc kubenswrapper[4933]: I1201 09:47:47.109289 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-7rjkh"] Dec 01 09:47:47 crc kubenswrapper[4933]: W1201 09:47:47.118527 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc10a734c_970c_42dd_aa15_a27dd68941e1.slice/crio-5e0250c1ad705483e7b10a4558c6cf703a43bf77b579a67d5a2f023b9b2e104c WatchSource:0}: Error finding container 5e0250c1ad705483e7b10a4558c6cf703a43bf77b579a67d5a2f023b9b2e104c: Status 404 returned error can't find the container with id 5e0250c1ad705483e7b10a4558c6cf703a43bf77b579a67d5a2f023b9b2e104c Dec 01 09:47:47 crc kubenswrapper[4933]: E1201 09:47:47.138280 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k6xgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-8rdcd_openstack-operators(c10a734c-970c-42dd-aa15-a27dd68941e1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 09:47:47 crc kubenswrapper[4933]: E1201 09:47:47.138573 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v2s62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6c548fd776-7rjkh_openstack-operators(eefc3c9c-eade-4b6e-8902-6936d481cb1b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 09:47:47 crc 
kubenswrapper[4933]: I1201 09:47:47.141126 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-frx4s"] Dec 01 09:47:47 crc kubenswrapper[4933]: E1201 09:47:47.141345 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k6xgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-8rdcd_openstack-operators(c10a734c-970c-42dd-aa15-a27dd68941e1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 09:47:47 crc kubenswrapper[4933]: E1201 09:47:47.141432 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v2s62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ironic-operator-controller-manager-6c548fd776-7rjkh_openstack-operators(eefc3c9c-eade-4b6e-8902-6936d481cb1b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 09:47:47 crc kubenswrapper[4933]: E1201 09:47:47.141554 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vvh6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-frx4s_openstack-operators(0bd5ca15-126a-4c31-814b-b0390dc01b3c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 09:47:47 crc kubenswrapper[4933]: E1201 09:47:47.142954 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-7rjkh" podUID="eefc3c9c-eade-4b6e-8902-6936d481cb1b" Dec 01 09:47:47 crc kubenswrapper[4933]: E1201 09:47:47.143265 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8rdcd" 
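podUID="c10a734c-970c-42dd-aa15-a27dd68941e1"

The burst of ErrImagePull: "pull QPS exceeded" failures here (neutron, ironic, swift, and shortly rabbitmq and placement) is not a registry problem: the kubelet rate-limits its own image pulls, and starting this many operator pods at once exhausts the limiter, so the affected containers fail fast and fall into ImagePullBackOff until a later retry succeeds. The documented KubeletConfiguration defaults are registryPullQPS: 5 and registryBurst: 10; whether this node overrides them is not visible in the log. An illustrative token-bucket sketch of that behavior (the class and numbers below are assumptions for illustration, not kubelet code):

# Illustrative only: a fail-fast token-bucket limiter in the spirit of
# the kubelet's image-pull throttle. With ~20 pulls requested in the
# same instant, the first 10 ride the burst and the rest are rejected,
# which is what "ErrImagePull: pull QPS exceeded" records.
import time

class TokenBucket:
    def __init__(self, qps=5.0, burst=10):
        self.qps, self.burst = qps, burst
        self.tokens, self.last = float(burst), time.monotonic()

    def try_acquire(self):
        now = time.monotonic()
        # Refill at qps tokens/second, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.qps)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # surfaced by the kubelet as "pull QPS exceeded"

bucket = TokenBucket()
results = [bucket.try_acquire() for _ in range(20)]
print(results.count(True), "pulls admitted,", results.count(False), "rejected")

The rejected pods are requeued by the pod workers, which matches the ImagePullBackOff entries that follow.
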
podUID="c10a734c-970c-42dd-aa15-a27dd68941e1" Dec 01 09:47:47 crc kubenswrapper[4933]: E1201 09:47:47.144401 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vvh6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-frx4s_openstack-operators(0bd5ca15-126a-4c31-814b-b0390dc01b3c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 09:47:47 crc kubenswrapper[4933]: E1201 09:47:47.146401 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-frx4s" podUID="0bd5ca15-126a-4c31-814b-b0390dc01b3c" Dec 01 09:47:47 crc kubenswrapper[4933]: W1201 09:47:47.148937 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb925c282_ee4d_4b1f_8f18_d3baa2f8faef.slice/crio-4d4ec3443537d72d1a5ce3301e8712e646e34569a8eef409b56e554ea98bab88 WatchSource:0}: Error finding container 4d4ec3443537d72d1a5ce3301e8712e646e34569a8eef409b56e554ea98bab88: Status 404 returned error can't find the container with id 4d4ec3443537d72d1a5ce3301e8712e646e34569a8eef409b56e554ea98bab88 Dec 01 09:47:47 crc kubenswrapper[4933]: E1201 09:47:47.168791 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mltrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-dhlrp_openstack-operators(3aa898e5-9bf0-4baf-9c71-261229f0baf0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 09:47:47 crc kubenswrapper[4933]: E1201 09:47:47.170022 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhlrp" podUID="3aa898e5-9bf0-4baf-9c71-261229f0baf0" Dec 01 09:47:47 crc kubenswrapper[4933]: I1201 09:47:47.176753 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhlrp"] Dec 01 09:47:47 crc kubenswrapper[4933]: I1201 09:47:47.201688 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-mlmgw"] Dec 01 09:47:47 crc kubenswrapper[4933]: E1201 09:47:47.203526 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 
500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nnpq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-w92f7_openstack-operators(83542dc0-212d-4257-935c-aced954e9157): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 01 09:47:47 crc kubenswrapper[4933]: E1201 09:47:47.205796 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nnpq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-w92f7_openstack-operators(83542dc0-212d-4257-935c-aced954e9157): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
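Every "Unhandled Error" container dump in the run above fails the same way: ErrImagePull with "pull QPS exceeded". That message does not come from the registry; it is the kubelet's own client-side token-bucket limiter rejecting pull requests synchronously once the bucket is empty, which is expected here because a dozen operator deployments landed on the node at once. A minimal sketch of the mechanism, assuming the kubelet's default registryPullQPS=5 and registryBurst=10 (the limiter is the same client-go helper the kubelet uses; the program around it is illustrative only):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Token bucket: refills at 5 tokens/s, holds at most 10
	// (assumed kubelet defaults: registryPullQPS=5, registryBurst=10).
	limiter := flowcontrol.NewTokenBucketRateLimiter(5.0, 10)

	// Fifteen pods request image pulls in the same instant.
	for i := 1; i <= 15; i++ {
		if limiter.TryAccept() {
			fmt.Printf("pull %2d: admitted\n", i)
		} else {
			// Rejected pulls fail immediately instead of queueing; this is
			// what surfaces in the log as ErrImagePull: pull QPS exceeded.
			fmt.Printf("pull %2d: pull QPS exceeded\n", i)
		}
	}
}
```

The rejections are transient by design: the pod workers requeue the pods, and the entries that follow show the same pods moving into ImagePullBackOff and retrying.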
Dec 01 09:47:47 crc kubenswrapper[4933]: E1201 09:47:47.207296 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-w92f7" podUID="83542dc0-212d-4257-935c-aced954e9157"
Dec 01 09:47:47 crc kubenswrapper[4933]: I1201 09:47:47.207415 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-w92f7"]
Dec 01 09:47:47 crc kubenswrapper[4933]: I1201 09:47:47.911421 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8rdcd" event={"ID":"c10a734c-970c-42dd-aa15-a27dd68941e1","Type":"ContainerStarted","Data":"5e0250c1ad705483e7b10a4558c6cf703a43bf77b579a67d5a2f023b9b2e104c"}
Dec 01 09:47:47 crc kubenswrapper[4933]: E1201 09:47:47.915092 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8rdcd" podUID="c10a734c-970c-42dd-aa15-a27dd68941e1"
Dec 01 09:47:47 crc kubenswrapper[4933]: I1201 09:47:47.915391 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-94gt2" event={"ID":"b303701b-30bc-4779-b1fa-f574bd6cce65","Type":"ContainerStarted","Data":"7f85c58a0f3ca5b9935412536ecffb4d0c8aed8f5dea979474e4cd5d80d916b4"}
Dec 01 09:47:47 crc kubenswrapper[4933]: I1201 09:47:47.917737 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-n5pnz" event={"ID":"e32cc225-71ff-4edf-8e11-ac7abf7afe27","Type":"ContainerStarted","Data":"3d68394bafbbd4aa87d0c8a7243e812a648320d0020b48c2511761ef9c4f42ca"}
Dec 01 09:47:47 crc kubenswrapper[4933]: I1201 09:47:47.920167 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-w8tzl" event={"ID":"9a84bd2a-303d-492c-b507-61fa590290d1","Type":"ContainerStarted","Data":"edc379774f49ea8eede449d0774570b7f72613bbf53cf69ef14fe12680c566e9"}
Dec 01 09:47:47 crc kubenswrapper[4933]: I1201 09:47:47.931625 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-mlmgw" event={"ID":"b925c282-ee4d-4b1f-8f18-d3baa2f8faef","Type":"ContainerStarted","Data":"4d4ec3443537d72d1a5ce3301e8712e646e34569a8eef409b56e554ea98bab88"}
Dec 01 09:47:47 crc kubenswrapper[4933]: I1201 09:47:47.938897 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-frx4s" event={"ID":"0bd5ca15-126a-4c31-814b-b0390dc01b3c","Type":"ContainerStarted","Data":"71aca4f5d672e78efeaccb3e71be20c4f0d4bd3b446935cec7471722c6f378d3"}
Dec 01 09:47:47 crc kubenswrapper[4933]: E1201 09:47:47.941101 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-frx4s" podUID="0bd5ca15-126a-4c31-814b-b0390dc01b3c" Dec 01 09:47:47 crc kubenswrapper[4933]: I1201 09:47:47.945122 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-bmhhw" event={"ID":"48cfc1f9-dbcb-4ff7-88b7-aa7709648627","Type":"ContainerStarted","Data":"36d61d914821a19e0b38997f870618e29f5ae01c4b7af0fdc332489a3a866ece"} Dec 01 09:47:47 crc kubenswrapper[4933]: I1201 09:47:47.981633 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhlrp" event={"ID":"3aa898e5-9bf0-4baf-9c71-261229f0baf0","Type":"ContainerStarted","Data":"90ed8354aa80ae480745ca1d4f501ba714e40617aefee69488d90b81184c4d31"} Dec 01 09:47:47 crc kubenswrapper[4933]: E1201 09:47:47.983293 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhlrp" podUID="3aa898e5-9bf0-4baf-9c71-261229f0baf0" Dec 01 09:47:47 crc kubenswrapper[4933]: I1201 09:47:47.985050 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-7rjkh" event={"ID":"eefc3c9c-eade-4b6e-8902-6936d481cb1b","Type":"ContainerStarted","Data":"05c883b370f2ab75484b8d39c2a5845fee745a9cf093473012ffdd556f0aeca7"} Dec 01 09:47:47 crc kubenswrapper[4933]: E1201 09:47:47.992732 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-7rjkh" podUID="eefc3c9c-eade-4b6e-8902-6936d481cb1b" Dec 01 09:47:47 crc kubenswrapper[4933]: I1201 09:47:47.993630 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-b2gcw" event={"ID":"6c192ef8-b774-486f-bb69-d73e8b89989e","Type":"ContainerStarted","Data":"c773e6d7c9055dedd79ba1ecb7b395a7074d0b64be77ad1ebde2e53b7c98cccc"} Dec 01 09:47:48 crc kubenswrapper[4933]: I1201 09:47:48.007116 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-w92f7" event={"ID":"83542dc0-212d-4257-935c-aced954e9157","Type":"ContainerStarted","Data":"b91ff5dc34032d50639049947fd272f3123b37cad33056afe21d5efeff80fd1d"} Dec 01 09:47:48 crc kubenswrapper[4933]: E1201 09:47:48.011514 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-w92f7" podUID="83542dc0-212d-4257-935c-aced954e9157"
Dec 01 09:47:49 crc kubenswrapper[4933]: E1201 09:47:49.060668 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhlrp" podUID="3aa898e5-9bf0-4baf-9c71-261229f0baf0"
Dec 01 09:47:49 crc kubenswrapper[4933]: E1201 09:47:49.071428 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-w92f7" podUID="83542dc0-212d-4257-935c-aced954e9157"
Dec 01 09:47:49 crc kubenswrapper[4933]: E1201 09:47:49.071499 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8rdcd" podUID="c10a734c-970c-42dd-aa15-a27dd68941e1"
Dec 01 09:47:49 crc kubenswrapper[4933]: E1201 09:47:49.071807 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-frx4s" podUID="0bd5ca15-126a-4c31-814b-b0390dc01b3c"
Dec 01 09:47:49 crc kubenswrapper[4933]: E1201 09:47:49.072119 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-7rjkh" podUID="eefc3c9c-eade-4b6e-8902-6936d481cb1b"
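Two seconds after the QPS rejections, the same pods have moved from ErrImagePull to ImagePullBackOff: the kubelet now refuses to retry each pull until a per-pod, per-image back-off window expires, and the window doubles after every failure. A sketch of that schedule with the client-go Backoff helper, assuming the kubelet's usual parameters of a 10s initial delay capped at 5m (the key string is illustrative):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Assumed kubelet parameters: 10s initial back-off, 300s ceiling.
	backoff := flowcontrol.NewBackOff(10*time.Second, 300*time.Second)
	key := "0bd5ca15_swift-operator" // illustrative; kubelet keys on pod UID + image

	for attempt := 1; attempt <= 7; attempt++ {
		backoff.Next(key, backoff.Clock.Now()) // record another failed pull
		fmt.Printf("attempt %d failed; next retry in %v\n", attempt, backoff.Get(key))
	}
	// Prints 10s, 20s, 40s, 1m20s, 2m40s, then clamps at 5m0s: the
	// "Back-off pulling image" entries repeat until a pull succeeds,
	// which resets the entry for that pod and image.
}
```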
Dec 01 09:47:49 crc kubenswrapper[4933]: I1201 09:47:49.552408 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dd39823-94d3-4a96-90e4-ada73223c4b0-cert\") pod \"infra-operator-controller-manager-57548d458d-hcgq6\" (UID: \"7dd39823-94d3-4a96-90e4-ada73223c4b0\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-hcgq6"
Dec 01 09:47:49 crc kubenswrapper[4933]: E1201 09:47:49.553072 4933 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Dec 01 09:47:49 crc kubenswrapper[4933]: E1201 09:47:49.553149 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dd39823-94d3-4a96-90e4-ada73223c4b0-cert podName:7dd39823-94d3-4a96-90e4-ada73223c4b0 nodeName:}" failed. No retries permitted until 2025-12-01 09:47:57.553128543 +0000 UTC m=+968.194852158 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7dd39823-94d3-4a96-90e4-ada73223c4b0-cert") pod "infra-operator-controller-manager-57548d458d-hcgq6" (UID: "7dd39823-94d3-4a96-90e4-ada73223c4b0") : secret "infra-operator-webhook-server-cert" not found
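The failure mode changes here: this is a volume problem, not an image problem. The secret volume plugin cannot find infra-operator-webhook-server-cert (and, just below, the openstack-baremetal-operator and openstack-operator cert secrets), so MountVolume.SetUp fails and the volume manager schedules retries with its own doubling delay, 8s now and 16s on the next cycle. A pod cannot get a sandbox until all of its volumes mount; the entries from 09:47:57 onward show the mounts succeeding once whatever issues the certificates has created the secrets. A diagnostic sketch for this state, using the namespace and secret name from the log (the kubeconfig path is an assumption):

```go
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for the environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	_, err = cs.CoreV1().Secrets("openstack-operators").
		Get(context.TODO(), "infra-operator-webhook-server-cert", metav1.GetOptions{})
	switch {
	case apierrors.IsNotFound(err):
		// Matches the kubelet error above: SetUp retries (8s, 16s, ...) until
		// the Secret is created, e.g. by OLM or cert-manager.
		fmt.Println("secret missing; pod will stay in ContainerCreating")
	case err != nil:
		panic(err)
	default:
		fmt.Println("secret exists; the kubelet's next retry should mount it")
	}
}
```

In this log the wait resolves on its own; a check like this is only interesting when the secret never appears.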
Dec 01 09:47:50 crc kubenswrapper[4933]: I1201 09:47:50.220113 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96699ea8-fc44-4dc2-a6f2-f2109d091097-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd45jrln\" (UID: \"96699ea8-fc44-4dc2-a6f2-f2109d091097\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd45jrln"
Dec 01 09:47:50 crc kubenswrapper[4933]: E1201 09:47:50.220374 4933 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Dec 01 09:47:50 crc kubenswrapper[4933]: E1201 09:47:50.220425 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96699ea8-fc44-4dc2-a6f2-f2109d091097-cert podName:96699ea8-fc44-4dc2-a6f2-f2109d091097 nodeName:}" failed. No retries permitted until 2025-12-01 09:47:58.220410173 +0000 UTC m=+968.862133788 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/96699ea8-fc44-4dc2-a6f2-f2109d091097-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd45jrln" (UID: "96699ea8-fc44-4dc2-a6f2-f2109d091097") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Dec 01 09:47:50 crc kubenswrapper[4933]: I1201 09:47:50.633886 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-metrics-certs\") pod \"openstack-operator-controller-manager-547ff67f67-9fnsd\" (UID: \"c976f88e-97eb-4223-9475-252505656b6d\") " pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd"
Dec 01 09:47:50 crc kubenswrapper[4933]: I1201 09:47:50.633961 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-webhook-certs\") pod \"openstack-operator-controller-manager-547ff67f67-9fnsd\" (UID: \"c976f88e-97eb-4223-9475-252505656b6d\") " pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd"
Dec 01 09:47:50 crc kubenswrapper[4933]: E1201 09:47:50.634145 4933 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Dec 01 09:47:50 crc kubenswrapper[4933]: E1201 09:47:50.634197 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-webhook-certs podName:c976f88e-97eb-4223-9475-252505656b6d nodeName:}" failed. No retries permitted until 2025-12-01 09:47:58.634182686 +0000 UTC m=+969.275906301 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-webhook-certs") pod "openstack-operator-controller-manager-547ff67f67-9fnsd" (UID: "c976f88e-97eb-4223-9475-252505656b6d") : secret "webhook-server-cert" not found
Dec 01 09:47:50 crc kubenswrapper[4933]: E1201 09:47:50.634212 4933 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Dec 01 09:47:50 crc kubenswrapper[4933]: E1201 09:47:50.634291 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-metrics-certs podName:c976f88e-97eb-4223-9475-252505656b6d nodeName:}" failed. No retries permitted until 2025-12-01 09:47:58.634270458 +0000 UTC m=+969.275994153 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-metrics-certs") pod "openstack-operator-controller-manager-547ff67f67-9fnsd" (UID: "c976f88e-97eb-4223-9475-252505656b6d") : secret "metrics-server-cert" not found Dec 01 09:47:57 crc kubenswrapper[4933]: I1201 09:47:57.614514 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dd39823-94d3-4a96-90e4-ada73223c4b0-cert\") pod \"infra-operator-controller-manager-57548d458d-hcgq6\" (UID: \"7dd39823-94d3-4a96-90e4-ada73223c4b0\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-hcgq6" Dec 01 09:47:57 crc kubenswrapper[4933]: I1201 09:47:57.631171 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dd39823-94d3-4a96-90e4-ada73223c4b0-cert\") pod \"infra-operator-controller-manager-57548d458d-hcgq6\" (UID: \"7dd39823-94d3-4a96-90e4-ada73223c4b0\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-hcgq6" Dec 01 09:47:57 crc kubenswrapper[4933]: I1201 09:47:57.905172 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7bx8x" Dec 01 09:47:57 crc kubenswrapper[4933]: I1201 09:47:57.911286 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-hcgq6" Dec 01 09:47:58 crc kubenswrapper[4933]: I1201 09:47:58.225269 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96699ea8-fc44-4dc2-a6f2-f2109d091097-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd45jrln\" (UID: \"96699ea8-fc44-4dc2-a6f2-f2109d091097\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd45jrln" Dec 01 09:47:58 crc kubenswrapper[4933]: I1201 09:47:58.230361 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96699ea8-fc44-4dc2-a6f2-f2109d091097-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd45jrln\" (UID: \"96699ea8-fc44-4dc2-a6f2-f2109d091097\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd45jrln" Dec 01 09:47:58 crc kubenswrapper[4933]: I1201 09:47:58.479065 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-r9vt5" Dec 01 09:47:58 crc kubenswrapper[4933]: I1201 09:47:58.487192 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd45jrln" Dec 01 09:47:58 crc kubenswrapper[4933]: I1201 09:47:58.733761 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-metrics-certs\") pod \"openstack-operator-controller-manager-547ff67f67-9fnsd\" (UID: \"c976f88e-97eb-4223-9475-252505656b6d\") " pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd" Dec 01 09:47:58 crc kubenswrapper[4933]: I1201 09:47:58.733877 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-webhook-certs\") pod \"openstack-operator-controller-manager-547ff67f67-9fnsd\" (UID: \"c976f88e-97eb-4223-9475-252505656b6d\") " pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd" Dec 01 09:47:58 crc kubenswrapper[4933]: E1201 09:47:58.734091 4933 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 01 09:47:58 crc kubenswrapper[4933]: E1201 09:47:58.734175 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-metrics-certs podName:c976f88e-97eb-4223-9475-252505656b6d nodeName:}" failed. No retries permitted until 2025-12-01 09:48:14.734153144 +0000 UTC m=+985.375876759 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-metrics-certs") pod "openstack-operator-controller-manager-547ff67f67-9fnsd" (UID: "c976f88e-97eb-4223-9475-252505656b6d") : secret "metrics-server-cert" not found Dec 01 09:47:58 crc kubenswrapper[4933]: E1201 09:47:58.734876 4933 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 01 09:47:58 crc kubenswrapper[4933]: E1201 09:47:58.734916 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-webhook-certs podName:c976f88e-97eb-4223-9475-252505656b6d nodeName:}" failed. No retries permitted until 2025-12-01 09:48:14.734902252 +0000 UTC m=+985.376626057 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-webhook-certs") pod "openstack-operator-controller-manager-547ff67f67-9fnsd" (UID: "c976f88e-97eb-4223-9475-252505656b6d") : secret "webhook-server-cert" not found Dec 01 09:48:01 crc kubenswrapper[4933]: E1201 09:48:01.931401 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea" Dec 01 09:48:01 crc kubenswrapper[4933]: E1201 09:48:01.932063 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rww7s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7d9dfd778-9fvkr_openstack-operators(e1f14086-5509-48fe-a88c-c2717009ef93): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:48:03 crc kubenswrapper[4933]: E1201 09:48:03.428682 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:ecf7be921850bdc04697ed1b332bab39ad2a64e4e45c2a445c04f9bae6ac61b5" Dec 01 09:48:03 crc 
kubenswrapper[4933]: E1201 09:48:03.429368 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:ecf7be921850bdc04697ed1b332bab39ad2a64e4e45c2a445c04f9bae6ac61b5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p6fp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-6546668bfd-mlmgw_openstack-operators(b925c282-ee4d-4b1f-8f18-d3baa2f8faef): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:48:04 crc kubenswrapper[4933]: E1201 09:48:04.350343 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:440cde33d3a2a0c545cd1c110a3634eb85544370f448865b97a13c38034b0172" Dec 01 09:48:04 crc kubenswrapper[4933]: E1201 09:48:04.350609 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:440cde33d3a2a0c545cd1c110a3634eb85544370f448865b97a13c38034b0172,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dp5j6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-668d9c48b9-4bfbh_openstack-operators(e88cb01f-84f3-4cdc-9d5d-f283f883868e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:48:05 crc kubenswrapper[4933]: E1201 09:48:05.456753 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385" Dec 01 09:48:05 crc kubenswrapper[4933]: E1201 09:48:05.457004 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tv646,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-b2gcw_openstack-operators(6c192ef8-b774-486f-bb69-d73e8b89989e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:48:06 crc kubenswrapper[4933]: E1201 09:48:06.559913 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621" Dec 01 09:48:06 crc kubenswrapper[4933]: E1201 09:48:06.560618 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8rtlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-bmhhw_openstack-operators(48cfc1f9-dbcb-4ff7-88b7-aa7709648627): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:48:08 crc kubenswrapper[4933]: E1201 09:48:08.546038 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 01 09:48:08 crc kubenswrapper[4933]: E1201 09:48:08.546257 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5vkcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-w8tzl_openstack-operators(9a84bd2a-303d-492c-b507-61fa590290d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:48:08 crc kubenswrapper[4933]: I1201 09:48:08.549163 4933 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 09:48:10 crc kubenswrapper[4933]: E1201 09:48:10.219491 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Dec 01 09:48:10 crc kubenswrapper[4933]: E1201 09:48:10.220186 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4zgml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-7c9rv_openstack-operators(2550654d-3a84-420e-bcaa-75a2f3c88dec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:48:11 crc kubenswrapper[4933]: E1201 09:48:11.044925 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94" Dec 01 09:48:11 crc kubenswrapper[4933]: E1201 09:48:11.045424 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w78jl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-w9jcs_openstack-operators(c807406f-80fb-422b-a68f-e9706da2ac42): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:48:11 crc kubenswrapper[4933]: I1201 09:48:11.741175 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:48:11 crc kubenswrapper[4933]: I1201 09:48:11.743098 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:48:11 crc kubenswrapper[4933]: E1201 09:48:11.925916 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7" Dec 01 09:48:11 crc kubenswrapper[4933]: E1201 09:48:11.926346 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8hvk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-56bbcc9d85-n5pnz_openstack-operators(e32cc225-71ff-4edf-8e11-ac7abf7afe27): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:48:14 crc kubenswrapper[4933]: E1201 09:48:14.512985 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:986861e5a0a9954f63581d9d55a30f8057883cefea489415d76257774526eea3" Dec 01 09:48:14 crc kubenswrapper[4933]: E1201 09:48:14.513636 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:986861e5a0a9954f63581d9d55a30f8057883cefea489415d76257774526eea3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-82kvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-546d4bdf48-94gt2_openstack-operators(b303701b-30bc-4779-b1fa-f574bd6cce65): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:48:14 crc kubenswrapper[4933]: I1201 09:48:14.808392 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-metrics-certs\") pod \"openstack-operator-controller-manager-547ff67f67-9fnsd\" (UID: \"c976f88e-97eb-4223-9475-252505656b6d\") " pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd" Dec 01 09:48:14 crc kubenswrapper[4933]: I1201 09:48:14.808997 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-webhook-certs\") pod \"openstack-operator-controller-manager-547ff67f67-9fnsd\" (UID: \"c976f88e-97eb-4223-9475-252505656b6d\") " pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd" Dec 01 09:48:14 crc kubenswrapper[4933]: I1201 09:48:14.822107 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-metrics-certs\") pod \"openstack-operator-controller-manager-547ff67f67-9fnsd\" (UID: \"c976f88e-97eb-4223-9475-252505656b6d\") " pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd" Dec 01 09:48:14 crc kubenswrapper[4933]: I1201 09:48:14.825461 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c976f88e-97eb-4223-9475-252505656b6d-webhook-certs\") pod \"openstack-operator-controller-manager-547ff67f67-9fnsd\" (UID: \"c976f88e-97eb-4223-9475-252505656b6d\") " pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd" Dec 01 09:48:14 crc kubenswrapper[4933]: I1201 09:48:14.916221 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-vmkkv" Dec 01 09:48:14 crc kubenswrapper[4933]: I1201 09:48:14.923575 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd" Dec 01 09:48:18 crc kubenswrapper[4933]: E1201 09:48:18.051639 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 01 09:48:18 crc kubenswrapper[4933]: E1201 09:48:18.052997 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mltrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-dhlrp_openstack-operators(3aa898e5-9bf0-4baf-9c71-261229f0baf0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:48:18 crc kubenswrapper[4933]: E1201 09:48:18.055168 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhlrp" podUID="3aa898e5-9bf0-4baf-9c71-261229f0baf0" Dec 01 09:48:18 crc kubenswrapper[4933]: I1201 09:48:18.587405 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-hcgq6"] Dec 01 09:48:18 crc kubenswrapper[4933]: I1201 09:48:18.610593 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd45jrln"] Dec 01 09:48:19 crc kubenswrapper[4933]: W1201 09:48:19.249871 4933 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dd39823_94d3_4a96_90e4_ada73223c4b0.slice/crio-cb87408aa666eaec5e8ba3103d3bf2b48e47111a8c65264a48e2c26628234ca6 WatchSource:0}: Error finding container cb87408aa666eaec5e8ba3103d3bf2b48e47111a8c65264a48e2c26628234ca6: Status 404 returned error can't find the container with id cb87408aa666eaec5e8ba3103d3bf2b48e47111a8c65264a48e2c26628234ca6 Dec 01 09:48:19 crc kubenswrapper[4933]: W1201 09:48:19.252181 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96699ea8_fc44_4dc2_a6f2_f2109d091097.slice/crio-cc22e8a43b9966b9ec9925c659588c1ccc865eb136b54861c08539932f64d538 WatchSource:0}: Error finding container cc22e8a43b9966b9ec9925c659588c1ccc865eb136b54861c08539932f64d538: Status 404 returned error can't find the container with id cc22e8a43b9966b9ec9925c659588c1ccc865eb136b54861c08539932f64d538 Dec 01 09:48:19 crc kubenswrapper[4933]: I1201 09:48:19.461622 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd45jrln" event={"ID":"96699ea8-fc44-4dc2-a6f2-f2109d091097","Type":"ContainerStarted","Data":"cc22e8a43b9966b9ec9925c659588c1ccc865eb136b54861c08539932f64d538"} Dec 01 09:48:19 crc kubenswrapper[4933]: I1201 09:48:19.462753 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-hcgq6" event={"ID":"7dd39823-94d3-4a96-90e4-ada73223c4b0","Type":"ContainerStarted","Data":"cb87408aa666eaec5e8ba3103d3bf2b48e47111a8c65264a48e2c26628234ca6"} Dec 01 09:48:19 crc kubenswrapper[4933]: I1201 09:48:19.991649 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd"] Dec 01 09:48:20 crc kubenswrapper[4933]: I1201 09:48:20.502067 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lxkkf" event={"ID":"a8f52d69-0961-4ac0-b41f-200400bfcf2b","Type":"ContainerStarted","Data":"8221c846f7af4b6c9fd234f0a2f7162187f0fd5a40b2134d2beae386b2b143b6"} Dec 01 09:48:20 crc kubenswrapper[4933]: I1201 09:48:20.508573 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gd76x" event={"ID":"96d92174-459d-4657-bbbb-a56271877411","Type":"ContainerStarted","Data":"c21ffabd014cca43c479cc6c0185d81f3f3bd6a5deab3e90b3f5d618feb099b8"} Dec 01 09:48:20 crc kubenswrapper[4933]: I1201 09:48:20.520055 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-w92f7" event={"ID":"83542dc0-212d-4257-935c-aced954e9157","Type":"ContainerStarted","Data":"184ad2fdcc9ffe190bb8a021caf33c8613aaa05fcfc269daa526535f791dd064"} Dec 01 09:48:20 crc kubenswrapper[4933]: I1201 09:48:20.531258 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8rdcd" event={"ID":"c10a734c-970c-42dd-aa15-a27dd68941e1","Type":"ContainerStarted","Data":"41cbe77da17fe38ff0a11b2f0da241353f86f13f35f2d254209d504e926885cd"} Dec 01 09:48:20 crc kubenswrapper[4933]: I1201 09:48:20.537192 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-cpthv" 
event={"ID":"9c52b072-b528-4fee-88b8-c878150882b1","Type":"ContainerStarted","Data":"ed0dd20f92ea413e466921f6fe6079c72f944896ad59be379e533ef484c846fa"} Dec 01 09:48:20 crc kubenswrapper[4933]: I1201 09:48:20.538859 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-fntw7" event={"ID":"19b19877-3b1b-40f9-9501-329bceb4756a","Type":"ContainerStarted","Data":"7ee85baf53ad70d1139b2b15ee8f7a5cc3cb6c8f0d2a2a51156ea4ef269e84b4"} Dec 01 09:48:20 crc kubenswrapper[4933]: I1201 09:48:20.539888 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd" event={"ID":"c976f88e-97eb-4223-9475-252505656b6d","Type":"ContainerStarted","Data":"e291c473b447c3d91dd7ba2485beafd75fe7ec92fac6e3b1c523bd8ee5f5827f"} Dec 01 09:48:20 crc kubenswrapper[4933]: I1201 09:48:20.701472 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6q6m6" event={"ID":"9564306d-6348-40b4-9e3e-42fcd5778383","Type":"ContainerStarted","Data":"f2a451644a6ac2e730da3ae12654436948c05e6b28e5a8a37f19e1fbe657fab0"} Dec 01 09:48:21 crc kubenswrapper[4933]: I1201 09:48:21.784155 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-frx4s" event={"ID":"0bd5ca15-126a-4c31-814b-b0390dc01b3c","Type":"ContainerStarted","Data":"566b1ff6a00bd50aa0d58527419525334e15f26de628a856fffb2768413eacc5"} Dec 01 09:48:30 crc kubenswrapper[4933]: E1201 09:48:30.669867 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhlrp" podUID="3aa898e5-9bf0-4baf-9c71-261229f0baf0" Dec 01 09:48:31 crc kubenswrapper[4933]: E1201 09:48:31.169654 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 01 09:48:31 crc kubenswrapper[4933]: E1201 09:48:31.169853 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w78jl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-w9jcs_openstack-operators(c807406f-80fb-422b-a68f-e9706da2ac42): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:48:31 crc kubenswrapper[4933]: E1201 09:48:31.171191 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-w9jcs" podUID="c807406f-80fb-422b-a68f-e9706da2ac42" Dec 01 09:48:34 crc kubenswrapper[4933]: E1201 09:48:34.485596 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 01 09:48:34 crc kubenswrapper[4933]: E1201 09:48:34.486860 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8rtlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-bmhhw_openstack-operators(48cfc1f9-dbcb-4ff7-88b7-aa7709648627): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:48:34 crc kubenswrapper[4933]: E1201 09:48:34.488033 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-bmhhw" podUID="48cfc1f9-dbcb-4ff7-88b7-aa7709648627" Dec 01 09:48:34 crc kubenswrapper[4933]: E1201 09:48:34.489839 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 01 09:48:34 crc kubenswrapper[4933]: E1201 09:48:34.490045 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p6fp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-6546668bfd-mlmgw_openstack-operators(b925c282-ee4d-4b1f-8f18-d3baa2f8faef): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:48:34 crc kubenswrapper[4933]: E1201 09:48:34.490881 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 01 09:48:34 crc kubenswrapper[4933]: E1201 09:48:34.491075 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m 
DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dp5j6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-668d9c48b9-4bfbh_openstack-operators(e88cb01f-84f3-4cdc-9d5d-f283f883868e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:48:34 crc kubenswrapper[4933]: E1201 09:48:34.491202 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-mlmgw" podUID="b925c282-ee4d-4b1f-8f18-d3baa2f8faef" Dec 01 09:48:34 crc kubenswrapper[4933]: E1201 09:48:34.493071 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-4bfbh" podUID="e88cb01f-84f3-4cdc-9d5d-f283f883868e" Dec 01 09:48:34 crc kubenswrapper[4933]: E1201 09:48:34.981333 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 01 09:48:34 crc kubenswrapper[4933]: E1201 09:48:34.981511 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rww7s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7d9dfd778-9fvkr_openstack-operators(e1f14086-5509-48fe-a88c-c2717009ef93): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:48:34 crc kubenswrapper[4933]: E1201 09:48:34.983052 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-9fvkr" podUID="e1f14086-5509-48fe-a88c-c2717009ef93" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.553381 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:09a6d0613ee2d3c1c809fc36c22678458ac271e0da87c970aec0a5339f5423f7" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.554039 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:09a6d0613ee2d3c1c809fc36c22678458ac271e0da87c970aec0a5339f5423f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xfz9h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-57548d458d-hcgq6_openstack-operators(7dd39823-94d3-4a96-90e4-ada73223c4b0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.565562 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.565745 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vvh6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-frx4s_openstack-operators(0bd5ca15-126a-4c31-814b-b0390dc01b3c): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.565829 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.565909 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lbshw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-859b6ccc6-fntw7_openstack-operators(19b19877-3b1b-40f9-9501-329bceb4756a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.566728 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.566821 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k6xgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-8rdcd_openstack-operators(c10a734c-970c-42dd-aa15-a27dd68941e1): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.567086 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-fntw7" podUID="19b19877-3b1b-40f9-9501-329bceb4756a" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.567101 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.567143 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-frx4s" podUID="0bd5ca15-126a-4c31-814b-b0390dc01b3c" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.567267 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vgzmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-lxkkf_openstack-operators(a8f52d69-0961-4ac0-b41f-200400bfcf2b): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.567297 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.567425 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hkr7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-6q6m6_openstack-operators(9564306d-6348-40b4-9e3e-42fcd5778383): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.568163 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8rdcd" podUID="c10a734c-970c-42dd-aa15-a27dd68941e1" Dec 01 09:48:37 crc kubenswrapper[4933]: 
E1201 09:48:37.569257 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.569687 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nnpq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-w92f7_openstack-operators(83542dc0-212d-4257-935c-aced954e9157): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.569757 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.569851 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jmrz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-gd76x_openstack-operators(96d92174-459d-4657-bbbb-a56271877411): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.570779 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-w92f7" podUID="83542dc0-212d-4257-935c-aced954e9157" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.570927 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gd76x" podUID="96d92174-459d-4657-bbbb-a56271877411" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.570937 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6q6m6" podUID="9564306d-6348-40b4-9e3e-42fcd5778383" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.570973 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lxkkf" podUID="a8f52d69-0961-4ac0-b41f-200400bfcf2b" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.572406 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.572527 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m 
DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cb9xn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-78b4bc895b-cpthv_openstack-operators(9c52b072-b528-4fee-88b8-c878150882b1): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.573673 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-cpthv" podUID="9c52b072-b528-4fee-88b8-c878150882b1" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.634466 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.634738 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tv646,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-b2gcw_openstack-operators(6c192ef8-b774-486f-bb69-d73e8b89989e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.635862 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-b2gcw" podUID="6c192ef8-b774-486f-bb69-d73e8b89989e" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.743459 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.744090 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Na
me:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:curr
ent-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_I
MAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_
RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5q2s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-64bc77cfd45jrln_openstack-operators(96699ea8-fc44-4dc2-a6f2-f2109d091097): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:48:37 crc kubenswrapper[4933]: I1201 09:48:37.916336 
4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-7rjkh" event={"ID":"eefc3c9c-eade-4b6e-8902-6936d481cb1b","Type":"ContainerStarted","Data":"a7e971fde9f83c9366f035d56e4af6c1d942233d54c58c9b33b095791db3c3a7"} Dec 01 09:48:37 crc kubenswrapper[4933]: I1201 09:48:37.919752 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd" event={"ID":"c976f88e-97eb-4223-9475-252505656b6d","Type":"ContainerStarted","Data":"6208602251d0d868577591d5cfb829d59de2ecb9c781653f3cb8bd560abb9478"} Dec 01 09:48:37 crc kubenswrapper[4933]: I1201 09:48:37.921545 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-cpthv" Dec 01 09:48:37 crc kubenswrapper[4933]: I1201 09:48:37.921578 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lxkkf" Dec 01 09:48:37 crc kubenswrapper[4933]: I1201 09:48:37.921591 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd" Dec 01 09:48:37 crc kubenswrapper[4933]: I1201 09:48:37.925523 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-cpthv" Dec 01 09:48:37 crc kubenswrapper[4933]: I1201 09:48:37.926519 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lxkkf" Dec 01 09:48:37 crc kubenswrapper[4933]: E1201 09:48:37.993751 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-7c9rv" podUID="2550654d-3a84-420e-bcaa-75a2f3c88dec" Dec 01 09:48:38 crc kubenswrapper[4933]: I1201 09:48:38.000164 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd" podStartSLOduration=56.000125329 podStartE2EDuration="56.000125329s" podCreationTimestamp="2025-12-01 09:47:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:48:37.991284263 +0000 UTC m=+1008.633007888" watchObservedRunningTime="2025-12-01 09:48:38.000125329 +0000 UTC m=+1008.641848944" Dec 01 09:48:38 crc kubenswrapper[4933]: E1201 09:48:38.135793 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-94gt2" podUID="b303701b-30bc-4779-b1fa-f574bd6cce65" Dec 01 09:48:38 crc kubenswrapper[4933]: E1201 09:48:38.158553 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-w8tzl" podUID="9a84bd2a-303d-492c-b507-61fa590290d1" Dec 01 09:48:38 crc kubenswrapper[4933]: E1201 09:48:38.181565 4933 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd45jrln" podUID="96699ea8-fc44-4dc2-a6f2-f2109d091097" Dec 01 09:48:38 crc kubenswrapper[4933]: I1201 09:48:38.928954 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-mlmgw" event={"ID":"b925c282-ee4d-4b1f-8f18-d3baa2f8faef","Type":"ContainerStarted","Data":"e21346ebacb65aee721ec77782e667aaefe4375a3321b9eb75bcd57e06784e5e"} Dec 01 09:48:38 crc kubenswrapper[4933]: I1201 09:48:38.930408 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-7c9rv" event={"ID":"2550654d-3a84-420e-bcaa-75a2f3c88dec","Type":"ContainerStarted","Data":"6503ed2affe7449e196510d2ebb13feb07a7276e8ada5d12423e5eaeffb6e76e"} Dec 01 09:48:38 crc kubenswrapper[4933]: I1201 09:48:38.931898 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-4bfbh" event={"ID":"e88cb01f-84f3-4cdc-9d5d-f283f883868e","Type":"ContainerStarted","Data":"4bd42c84cc9de37fdd4ab2caa075a6fd7832127bf5a97cd66a7806e2b89a995d"} Dec 01 09:48:38 crc kubenswrapper[4933]: I1201 09:48:38.935621 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-bmhhw" event={"ID":"48cfc1f9-dbcb-4ff7-88b7-aa7709648627","Type":"ContainerStarted","Data":"b998c3287b9ecc0eedae9f043fb78006fb82301360a3283619e44edd96d25a97"} Dec 01 09:48:38 crc kubenswrapper[4933]: I1201 09:48:38.938289 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd45jrln" event={"ID":"96699ea8-fc44-4dc2-a6f2-f2109d091097","Type":"ContainerStarted","Data":"b8bae43b21fbb89d9d466f7b547deaa0d4757a193e4b9c697d9b0a03486d9c1e"} Dec 01 09:48:38 crc kubenswrapper[4933]: E1201 09:48:38.940683 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd45jrln" podUID="96699ea8-fc44-4dc2-a6f2-f2109d091097" Dec 01 09:48:38 crc kubenswrapper[4933]: I1201 09:48:38.941490 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-w8tzl" event={"ID":"9a84bd2a-303d-492c-b507-61fa590290d1","Type":"ContainerStarted","Data":"f8e2cd73b89c7045ca20bb1c98ca9fe3ffb3bcfa86c0d4b8053686f25cfc60af"} Dec 01 09:48:38 crc kubenswrapper[4933]: I1201 09:48:38.943979 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-w9jcs" event={"ID":"c807406f-80fb-422b-a68f-e9706da2ac42","Type":"ContainerStarted","Data":"dc7fe8ac02734e5cb5f194ef2dd6453342d58b5fed30b4e7e62385929117f011"} Dec 01 09:48:38 crc kubenswrapper[4933]: I1201 09:48:38.948506 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-9fvkr" 
event={"ID":"e1f14086-5509-48fe-a88c-c2717009ef93","Type":"ContainerStarted","Data":"8c6369ec3aad4144830e70a25d1e5c8b9c9d877a918685c349263d26311834b4"} Dec 01 09:48:38 crc kubenswrapper[4933]: I1201 09:48:38.956387 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-94gt2" event={"ID":"b303701b-30bc-4779-b1fa-f574bd6cce65","Type":"ContainerStarted","Data":"b81a1d95546d1ebda51b9aba8b193f739732a295a03c76e9832993ddac310fdb"} Dec 01 09:48:39 crc kubenswrapper[4933]: E1201 09:48:39.375360 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-n5pnz" podUID="e32cc225-71ff-4edf-8e11-ac7abf7afe27" Dec 01 09:48:39 crc kubenswrapper[4933]: E1201 09:48:39.839297 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-57548d458d-hcgq6" podUID="7dd39823-94d3-4a96-90e4-ada73223c4b0" Dec 01 09:48:39 crc kubenswrapper[4933]: I1201 09:48:39.994452 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-fntw7" event={"ID":"19b19877-3b1b-40f9-9501-329bceb4756a","Type":"ContainerStarted","Data":"11ce9361ebf2de580f80c06926011fe48d03b264dd187764efcc55b2c2db39a8"} Dec 01 09:48:39 crc kubenswrapper[4933]: I1201 09:48:39.997732 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-fntw7" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.005353 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-fntw7" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.089844 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-fntw7" podStartSLOduration=30.77458891 podStartE2EDuration="59.089826814s" podCreationTimestamp="2025-12-01 09:47:41 +0000 UTC" firstStartedPulling="2025-12-01 09:47:46.173917879 +0000 UTC m=+956.815641494" lastFinishedPulling="2025-12-01 09:48:14.489155783 +0000 UTC m=+985.130879398" observedRunningTime="2025-12-01 09:48:40.083369015 +0000 UTC m=+1010.725092630" watchObservedRunningTime="2025-12-01 09:48:40.089826814 +0000 UTC m=+1010.731550429" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.092829 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6q6m6" event={"ID":"9564306d-6348-40b4-9e3e-42fcd5778383","Type":"ContainerStarted","Data":"04b902a7af817943350a45b13dfe2503a3f63fe232d41cca68b80716f2a6495a"} Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.094088 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6q6m6" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.108797 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gd76x" 
event={"ID":"96d92174-459d-4657-bbbb-a56271877411","Type":"ContainerStarted","Data":"1b557c58ec58df743761298435ba09380c43aa827bf429ceb542a904119570d9"} Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.110201 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gd76x" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.124901 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-mlmgw" event={"ID":"b925c282-ee4d-4b1f-8f18-d3baa2f8faef","Type":"ContainerStarted","Data":"471347b69cbc8775f7baec8c8c7c836cfb456a171e793bc89b4ab8e82d3d46bb"} Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.126119 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-mlmgw" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.135668 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lxkkf" event={"ID":"a8f52d69-0961-4ac0-b41f-200400bfcf2b","Type":"ContainerStarted","Data":"7fc7dff1835e1cd5fccd7f861aca6f8110654fe7feee7fe21f24aec10d109088"} Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.140844 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6q6m6" podStartSLOduration=30.976262749 podStartE2EDuration="59.140822062s" podCreationTimestamp="2025-12-01 09:47:41 +0000 UTC" firstStartedPulling="2025-12-01 09:47:46.324573749 +0000 UTC m=+956.966297364" lastFinishedPulling="2025-12-01 09:48:14.489133062 +0000 UTC m=+985.130856677" observedRunningTime="2025-12-01 09:48:40.138351161 +0000 UTC m=+1010.780074776" watchObservedRunningTime="2025-12-01 09:48:40.140822062 +0000 UTC m=+1010.782545677" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.158561 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-cpthv" event={"ID":"9c52b072-b528-4fee-88b8-c878150882b1","Type":"ContainerStarted","Data":"effe0e771f80397d12eadb388180e613cefa98690fbb3259c4eb038796ebe939"} Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.191603 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gd76x" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.192540 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6q6m6" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.212590 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-7rjkh" event={"ID":"eefc3c9c-eade-4b6e-8902-6936d481cb1b","Type":"ContainerStarted","Data":"1caf870fd784651a73fa2fb7e0fa4a6ba7a4876042426596b2e37ab07541c20c"} Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.213356 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-7rjkh" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.221995 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-mlmgw" podStartSLOduration=8.588669523 podStartE2EDuration="59.221974859s" 
podCreationTimestamp="2025-12-01 09:47:41 +0000 UTC" firstStartedPulling="2025-12-01 09:47:47.168460745 +0000 UTC m=+957.810184360" lastFinishedPulling="2025-12-01 09:48:37.801766081 +0000 UTC m=+1008.443489696" observedRunningTime="2025-12-01 09:48:40.21793891 +0000 UTC m=+1010.859662515" watchObservedRunningTime="2025-12-01 09:48:40.221974859 +0000 UTC m=+1010.863698474" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.231578 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gd76x" podStartSLOduration=30.89382718 podStartE2EDuration="59.231554924s" podCreationTimestamp="2025-12-01 09:47:41 +0000 UTC" firstStartedPulling="2025-12-01 09:47:46.152563997 +0000 UTC m=+956.794287612" lastFinishedPulling="2025-12-01 09:48:14.490291741 +0000 UTC m=+985.132015356" observedRunningTime="2025-12-01 09:48:40.193645486 +0000 UTC m=+1010.835369091" watchObservedRunningTime="2025-12-01 09:48:40.231554924 +0000 UTC m=+1010.873278539" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.232239 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-w9jcs" event={"ID":"c807406f-80fb-422b-a68f-e9706da2ac42","Type":"ContainerStarted","Data":"a7c7584684b66556ecda7ad260c71e6906e806f64b30bd4b892b9dd3b5f61535"} Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.232572 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-w9jcs" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.236545 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8rdcd" event={"ID":"c10a734c-970c-42dd-aa15-a27dd68941e1","Type":"ContainerStarted","Data":"0c7f31831c3ce7a9772a85bbb8754239ab2163e1faee21115ce2fc9339dd6019"} Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.237618 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8rdcd" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.246851 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-b2gcw" event={"ID":"6c192ef8-b774-486f-bb69-d73e8b89989e","Type":"ContainerStarted","Data":"abb9e3e38abea7b631a6d12c40024b834292e7d3d5b17c71265cefb47020b52a"} Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.247842 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8rdcd" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.259026 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lxkkf" podStartSLOduration=31.136483592 podStartE2EDuration="59.259007616s" podCreationTimestamp="2025-12-01 09:47:41 +0000 UTC" firstStartedPulling="2025-12-01 09:47:46.367773637 +0000 UTC m=+957.009497252" lastFinishedPulling="2025-12-01 09:48:14.490297641 +0000 UTC m=+985.132021276" observedRunningTime="2025-12-01 09:48:40.252718642 +0000 UTC m=+1010.894442247" watchObservedRunningTime="2025-12-01 09:48:40.259007616 +0000 UTC m=+1010.900731231" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.263512 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-4bfbh" event={"ID":"e88cb01f-84f3-4cdc-9d5d-f283f883868e","Type":"ContainerStarted","Data":"c0e30953cc71d9eab54e188757ef96a6cade801264d02652004d7466d8de5e28"} Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.264615 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-4bfbh" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.268232 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-bmhhw" event={"ID":"48cfc1f9-dbcb-4ff7-88b7-aa7709648627","Type":"ContainerStarted","Data":"a5ae25a49c2e9096eeba6c38f42e5ff0de27b777126a9c341c59dd9f4540f141"} Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.269361 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-bmhhw" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.287627 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-w92f7" event={"ID":"83542dc0-212d-4257-935c-aced954e9157","Type":"ContainerStarted","Data":"841b5ce45734080ac2fdc13d81cb76d8a273d6e7f9d251132eba40b77d40ebcd"} Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.289291 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-w92f7" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.307540 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-n5pnz" event={"ID":"e32cc225-71ff-4edf-8e11-ac7abf7afe27","Type":"ContainerStarted","Data":"66b59032b24907e90430633b6d2edf852f7e1d546adcf342bc856e4246c2ba6f"} Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.317748 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-hcgq6" event={"ID":"7dd39823-94d3-4a96-90e4-ada73223c4b0","Type":"ContainerStarted","Data":"3cb505cdf8511073014248719e2c0fa161b366865e2f22161e33ba76b8e587b3"} Dec 01 09:48:40 crc kubenswrapper[4933]: E1201 09:48:40.320225 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:09a6d0613ee2d3c1c809fc36c22678458ac271e0da87c970aec0a5339f5423f7\\\"\"" pod="openstack-operators/infra-operator-controller-manager-57548d458d-hcgq6" podUID="7dd39823-94d3-4a96-90e4-ada73223c4b0" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.321159 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-w9jcs" podStartSLOduration=7.275332899 podStartE2EDuration="58.321135858s" podCreationTimestamp="2025-12-01 09:47:42 +0000 UTC" firstStartedPulling="2025-12-01 09:47:46.618106427 +0000 UTC m=+957.259830042" lastFinishedPulling="2025-12-01 09:48:37.663909386 +0000 UTC m=+1008.305633001" observedRunningTime="2025-12-01 09:48:40.283757292 +0000 UTC m=+1010.925480897" watchObservedRunningTime="2025-12-01 09:48:40.321135858 +0000 UTC m=+1010.962859473" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.321717 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-cpthv" podStartSLOduration=31.148011844 podStartE2EDuration="59.321710572s" podCreationTimestamp="2025-12-01 09:47:41 +0000 UTC" firstStartedPulling="2025-12-01 09:47:46.316301536 +0000 UTC m=+956.958025151" lastFinishedPulling="2025-12-01 09:48:14.490000264 +0000 UTC m=+985.131723879" observedRunningTime="2025-12-01 09:48:40.315583312 +0000 UTC m=+1010.957306937" watchObservedRunningTime="2025-12-01 09:48:40.321710572 +0000 UTC m=+1010.963434187" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.338238 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-frx4s" event={"ID":"0bd5ca15-126a-4c31-814b-b0390dc01b3c","Type":"ContainerStarted","Data":"566d2657e855588585988f6ee6b17a3010b004f904d7d9c00ce7c5bdbdb30ac7"} Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.340188 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-frx4s" Dec 01 09:48:40 crc kubenswrapper[4933]: E1201 09:48:40.340460 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd45jrln" podUID="96699ea8-fc44-4dc2-a6f2-f2109d091097" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.372994 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-frx4s" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.374544 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-w92f7" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.543421 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-7rjkh" podStartSLOduration=8.521152119 podStartE2EDuration="59.5434024s" podCreationTimestamp="2025-12-01 09:47:41 +0000 UTC" firstStartedPulling="2025-12-01 09:47:47.13842895 +0000 UTC m=+957.780152565" lastFinishedPulling="2025-12-01 09:48:38.160679231 +0000 UTC m=+1008.802402846" observedRunningTime="2025-12-01 09:48:40.539816263 +0000 UTC m=+1011.181539878" watchObservedRunningTime="2025-12-01 09:48:40.5434024 +0000 UTC m=+1011.185126015" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.544924 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8rdcd" podStartSLOduration=28.62377486 podStartE2EDuration="59.544915088s" podCreationTimestamp="2025-12-01 09:47:41 +0000 UTC" firstStartedPulling="2025-12-01 09:47:47.138080012 +0000 UTC m=+957.779803627" lastFinishedPulling="2025-12-01 09:48:18.05922024 +0000 UTC m=+988.700943855" observedRunningTime="2025-12-01 09:48:40.347704049 +0000 UTC m=+1010.989427664" watchObservedRunningTime="2025-12-01 09:48:40.544915088 +0000 UTC m=+1011.186638713" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.843809 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-4bfbh" 
podStartSLOduration=8.564059829 podStartE2EDuration="59.843775306s" podCreationTimestamp="2025-12-01 09:47:41 +0000 UTC" firstStartedPulling="2025-12-01 09:47:46.382700872 +0000 UTC m=+957.024424487" lastFinishedPulling="2025-12-01 09:48:37.662416349 +0000 UTC m=+1008.304139964" observedRunningTime="2025-12-01 09:48:40.807751944 +0000 UTC m=+1011.449475559" watchObservedRunningTime="2025-12-01 09:48:40.843775306 +0000 UTC m=+1011.485498911" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.844499 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-bmhhw" podStartSLOduration=8.285416736 podStartE2EDuration="58.844492204s" podCreationTimestamp="2025-12-01 09:47:42 +0000 UTC" firstStartedPulling="2025-12-01 09:47:47.106365415 +0000 UTC m=+957.748089030" lastFinishedPulling="2025-12-01 09:48:37.665440873 +0000 UTC m=+1008.307164498" observedRunningTime="2025-12-01 09:48:40.841768478 +0000 UTC m=+1011.483492093" watchObservedRunningTime="2025-12-01 09:48:40.844492204 +0000 UTC m=+1011.486215819" Dec 01 09:48:40 crc kubenswrapper[4933]: I1201 09:48:40.872098 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-w92f7" podStartSLOduration=29.015823269 podStartE2EDuration="59.872069029s" podCreationTimestamp="2025-12-01 09:47:41 +0000 UTC" firstStartedPulling="2025-12-01 09:47:47.203421371 +0000 UTC m=+957.845144986" lastFinishedPulling="2025-12-01 09:48:18.059667131 +0000 UTC m=+988.701390746" observedRunningTime="2025-12-01 09:48:40.868969343 +0000 UTC m=+1011.510692968" watchObservedRunningTime="2025-12-01 09:48:40.872069029 +0000 UTC m=+1011.513792644" Dec 01 09:48:41 crc kubenswrapper[4933]: I1201 09:48:41.014085 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-frx4s" podStartSLOduration=27.572339651 podStartE2EDuration="1m0.014053527s" podCreationTimestamp="2025-12-01 09:47:41 +0000 UTC" firstStartedPulling="2025-12-01 09:47:47.141424403 +0000 UTC m=+957.783148018" lastFinishedPulling="2025-12-01 09:48:19.583138289 +0000 UTC m=+990.224861894" observedRunningTime="2025-12-01 09:48:40.9215015 +0000 UTC m=+1011.563225115" watchObservedRunningTime="2025-12-01 09:48:41.014053527 +0000 UTC m=+1011.655777142" Dec 01 09:48:41 crc kubenswrapper[4933]: I1201 09:48:41.347119 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-9fvkr" event={"ID":"e1f14086-5509-48fe-a88c-c2717009ef93","Type":"ContainerStarted","Data":"e2055d44e10fd2bcaf32a77b5bde1218d899f74c889528d3b7a989513209f779"} Dec 01 09:48:41 crc kubenswrapper[4933]: I1201 09:48:41.347288 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-9fvkr" Dec 01 09:48:41 crc kubenswrapper[4933]: I1201 09:48:41.349194 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-94gt2" event={"ID":"b303701b-30bc-4779-b1fa-f574bd6cce65","Type":"ContainerStarted","Data":"ec362a1c7ab1439aefdae5ac7a80810420a8f0546fce1fa7736d32873ff0c08e"} Dec 01 09:48:41 crc kubenswrapper[4933]: I1201 09:48:41.349892 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-94gt2" Dec 01 
09:48:41 crc kubenswrapper[4933]: I1201 09:48:41.353137 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-w8tzl" event={"ID":"9a84bd2a-303d-492c-b507-61fa590290d1","Type":"ContainerStarted","Data":"16d70cf1f04aa92daa283bb27356c6c022492c6cb91e63b677e7ffdb11e29ba4"} Dec 01 09:48:41 crc kubenswrapper[4933]: I1201 09:48:41.353185 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-w8tzl" Dec 01 09:48:41 crc kubenswrapper[4933]: I1201 09:48:41.355021 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-7c9rv" event={"ID":"2550654d-3a84-420e-bcaa-75a2f3c88dec","Type":"ContainerStarted","Data":"a05baea8278d5b0fed6054e6a9dfaca944c0f08504e286cf94980544505fd08b"} Dec 01 09:48:41 crc kubenswrapper[4933]: I1201 09:48:41.355586 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-7c9rv" Dec 01 09:48:41 crc kubenswrapper[4933]: I1201 09:48:41.357862 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-b2gcw" event={"ID":"6c192ef8-b774-486f-bb69-d73e8b89989e","Type":"ContainerStarted","Data":"9ce258935e5cc8d9bcdf2e56fd259e722e5714b5865eddccd488a72b0a3ff328"} Dec 01 09:48:41 crc kubenswrapper[4933]: E1201 09:48:41.361500 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:09a6d0613ee2d3c1c809fc36c22678458ac271e0da87c970aec0a5339f5423f7\\\"\"" pod="openstack-operators/infra-operator-controller-manager-57548d458d-hcgq6" podUID="7dd39823-94d3-4a96-90e4-ada73223c4b0" Dec 01 09:48:41 crc kubenswrapper[4933]: I1201 09:48:41.380653 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-9fvkr" podStartSLOduration=8.928846872 podStartE2EDuration="1m0.380621773s" podCreationTimestamp="2025-12-01 09:47:41 +0000 UTC" firstStartedPulling="2025-12-01 09:47:46.367234773 +0000 UTC m=+957.008958388" lastFinishedPulling="2025-12-01 09:48:37.819009664 +0000 UTC m=+1008.460733289" observedRunningTime="2025-12-01 09:48:41.374160175 +0000 UTC m=+1012.015883790" watchObservedRunningTime="2025-12-01 09:48:41.380621773 +0000 UTC m=+1012.022345388" Dec 01 09:48:41 crc kubenswrapper[4933]: I1201 09:48:41.411106 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-7rjkh" Dec 01 09:48:41 crc kubenswrapper[4933]: I1201 09:48:41.453975 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-94gt2" podStartSLOduration=8.105368338 podStartE2EDuration="1m0.453953979s" podCreationTimestamp="2025-12-01 09:47:41 +0000 UTC" firstStartedPulling="2025-12-01 09:47:47.064409227 +0000 UTC m=+957.706132842" lastFinishedPulling="2025-12-01 09:48:39.412994878 +0000 UTC m=+1010.054718483" observedRunningTime="2025-12-01 09:48:41.407226154 +0000 UTC m=+1012.048949769" watchObservedRunningTime="2025-12-01 09:48:41.453953979 +0000 UTC m=+1012.095677594" Dec 01 09:48:41 crc kubenswrapper[4933]: I1201 09:48:41.455385 4933 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-w8tzl" podStartSLOduration=7.84941809 podStartE2EDuration="1m0.455378164s" podCreationTimestamp="2025-12-01 09:47:41 +0000 UTC" firstStartedPulling="2025-12-01 09:47:47.118614985 +0000 UTC m=+957.760338600" lastFinishedPulling="2025-12-01 09:48:39.724575059 +0000 UTC m=+1010.366298674" observedRunningTime="2025-12-01 09:48:41.449436959 +0000 UTC m=+1012.091160574" watchObservedRunningTime="2025-12-01 09:48:41.455378164 +0000 UTC m=+1012.097101779" Dec 01 09:48:41 crc kubenswrapper[4933]: I1201 09:48:41.610111 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-b2gcw" podStartSLOduration=8.82952889 podStartE2EDuration="1m0.610086642s" podCreationTimestamp="2025-12-01 09:47:41 +0000 UTC" firstStartedPulling="2025-12-01 09:47:47.137610269 +0000 UTC m=+957.779333884" lastFinishedPulling="2025-12-01 09:48:38.918168021 +0000 UTC m=+1009.559891636" observedRunningTime="2025-12-01 09:48:41.607814807 +0000 UTC m=+1012.249538422" watchObservedRunningTime="2025-12-01 09:48:41.610086642 +0000 UTC m=+1012.251810257" Dec 01 09:48:41 crc kubenswrapper[4933]: I1201 09:48:41.636351 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-7c9rv" podStartSLOduration=7.836712488 podStartE2EDuration="1m0.636295824s" podCreationTimestamp="2025-12-01 09:47:41 +0000 UTC" firstStartedPulling="2025-12-01 09:47:46.678211459 +0000 UTC m=+957.319935074" lastFinishedPulling="2025-12-01 09:48:39.477794795 +0000 UTC m=+1010.119518410" observedRunningTime="2025-12-01 09:48:41.635615428 +0000 UTC m=+1012.277339043" watchObservedRunningTime="2025-12-01 09:48:41.636295824 +0000 UTC m=+1012.278019439" Dec 01 09:48:41 crc kubenswrapper[4933]: I1201 09:48:41.741998 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:48:41 crc kubenswrapper[4933]: I1201 09:48:41.742119 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:48:42 crc kubenswrapper[4933]: I1201 09:48:42.147201 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-4bfbh" Dec 01 09:48:42 crc kubenswrapper[4933]: I1201 09:48:42.372546 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-n5pnz" event={"ID":"e32cc225-71ff-4edf-8e11-ac7abf7afe27","Type":"ContainerStarted","Data":"caf6bf29ddba966556180f5e0d58b66bccdb8b438e3167c845ee5bb282bd97c0"} Dec 01 09:48:42 crc kubenswrapper[4933]: I1201 09:48:42.375064 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-b2gcw" Dec 01 09:48:42 crc kubenswrapper[4933]: I1201 09:48:42.384279 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-n5pnz" Dec 01 09:48:42 crc kubenswrapper[4933]: I1201 09:48:42.392744 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-bmhhw" Dec 01 09:48:42 crc kubenswrapper[4933]: I1201 09:48:42.411980 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-n5pnz" podStartSLOduration=7.536444275 podStartE2EDuration="1m1.411940219s" podCreationTimestamp="2025-12-01 09:47:41 +0000 UTC" firstStartedPulling="2025-12-01 09:47:47.107955693 +0000 UTC m=+957.749679308" lastFinishedPulling="2025-12-01 09:48:40.983451637 +0000 UTC m=+1011.625175252" observedRunningTime="2025-12-01 09:48:42.403872621 +0000 UTC m=+1013.045596246" watchObservedRunningTime="2025-12-01 09:48:42.411940219 +0000 UTC m=+1013.053663834" Dec 01 09:48:43 crc kubenswrapper[4933]: I1201 09:48:43.057639 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-w9jcs" Dec 01 09:48:44 crc kubenswrapper[4933]: I1201 09:48:44.412722 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhlrp" event={"ID":"3aa898e5-9bf0-4baf-9c71-261229f0baf0","Type":"ContainerStarted","Data":"0a19cc8f5da84d2ff545d6575201bc56eebfa5d47f8a32691e216df7c00c3e07"} Dec 01 09:48:44 crc kubenswrapper[4933]: I1201 09:48:44.434294 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhlrp" podStartSLOduration=6.273165698 podStartE2EDuration="1m2.434271223s" podCreationTimestamp="2025-12-01 09:47:42 +0000 UTC" firstStartedPulling="2025-12-01 09:47:47.168543457 +0000 UTC m=+957.810267072" lastFinishedPulling="2025-12-01 09:48:43.329648982 +0000 UTC m=+1013.971372597" observedRunningTime="2025-12-01 09:48:44.42964174 +0000 UTC m=+1015.071365375" watchObservedRunningTime="2025-12-01 09:48:44.434271223 +0000 UTC m=+1015.075994838" Dec 01 09:48:44 crc kubenswrapper[4933]: I1201 09:48:44.930744 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-547ff67f67-9fnsd" Dec 01 09:48:51 crc kubenswrapper[4933]: I1201 09:48:51.816541 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-9fvkr" Dec 01 09:48:52 crc kubenswrapper[4933]: I1201 09:48:52.241546 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-94gt2" Dec 01 09:48:52 crc kubenswrapper[4933]: I1201 09:48:52.278252 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-mlmgw" Dec 01 09:48:52 crc kubenswrapper[4933]: I1201 09:48:52.314184 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-n5pnz" Dec 01 09:48:52 crc kubenswrapper[4933]: I1201 09:48:52.779121 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-7c9rv" Dec 01 09:48:52 crc kubenswrapper[4933]: I1201 09:48:52.808040 4933 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-w8tzl" Dec 01 09:48:53 crc kubenswrapper[4933]: I1201 09:48:53.177709 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-b2gcw" Dec 01 09:48:53 crc kubenswrapper[4933]: I1201 09:48:53.604683 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd45jrln" event={"ID":"96699ea8-fc44-4dc2-a6f2-f2109d091097","Type":"ContainerStarted","Data":"50bb8e31e842c457ceff440f308db819b43e56f5e54c3c56624b8a4a4eae5150"} Dec 01 09:48:53 crc kubenswrapper[4933]: I1201 09:48:53.604958 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd45jrln" Dec 01 09:48:53 crc kubenswrapper[4933]: I1201 09:48:53.644112 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd45jrln" podStartSLOduration=38.776437482 podStartE2EDuration="1m12.64408799s" podCreationTimestamp="2025-12-01 09:47:41 +0000 UTC" firstStartedPulling="2025-12-01 09:48:19.25976613 +0000 UTC m=+989.901489745" lastFinishedPulling="2025-12-01 09:48:53.127416648 +0000 UTC m=+1023.769140253" observedRunningTime="2025-12-01 09:48:53.637839458 +0000 UTC m=+1024.279563073" watchObservedRunningTime="2025-12-01 09:48:53.64408799 +0000 UTC m=+1024.285811605" Dec 01 09:48:58 crc kubenswrapper[4933]: I1201 09:48:58.494353 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd45jrln" Dec 01 09:48:58 crc kubenswrapper[4933]: I1201 09:48:58.642001 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-hcgq6" event={"ID":"7dd39823-94d3-4a96-90e4-ada73223c4b0","Type":"ContainerStarted","Data":"5cc488a6abc01cdd44b8e071fd3dda558e697d7cbe9cb6a22f0057829d003e01"} Dec 01 09:48:58 crc kubenswrapper[4933]: I1201 09:48:58.642234 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-hcgq6" Dec 01 09:48:58 crc kubenswrapper[4933]: I1201 09:48:58.660523 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-hcgq6" podStartSLOduration=39.074461001 podStartE2EDuration="1m17.660502266s" podCreationTimestamp="2025-12-01 09:47:41 +0000 UTC" firstStartedPulling="2025-12-01 09:48:19.253676291 +0000 UTC m=+989.895399906" lastFinishedPulling="2025-12-01 09:48:57.839717556 +0000 UTC m=+1028.481441171" observedRunningTime="2025-12-01 09:48:58.657959553 +0000 UTC m=+1029.299683168" watchObservedRunningTime="2025-12-01 09:48:58.660502266 +0000 UTC m=+1029.302225881" Dec 01 09:49:07 crc kubenswrapper[4933]: I1201 09:49:07.918977 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-hcgq6" Dec 01 09:49:11 crc kubenswrapper[4933]: I1201 09:49:11.741223 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:49:11 crc kubenswrapper[4933]: I1201 09:49:11.741651 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:49:11 crc kubenswrapper[4933]: I1201 09:49:11.741724 4933 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" Dec 01 09:49:11 crc kubenswrapper[4933]: I1201 09:49:11.742493 4933 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9380cff48ee91161c6b7a930159a88a7b204cb44f727f0c73879abbb5f388b3e"} pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 09:49:11 crc kubenswrapper[4933]: I1201 09:49:11.742556 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" containerID="cri-o://9380cff48ee91161c6b7a930159a88a7b204cb44f727f0c73879abbb5f388b3e" gracePeriod=600 Dec 01 09:49:12 crc kubenswrapper[4933]: I1201 09:49:12.745609 4933 generic.go:334] "Generic (PLEG): container finished" podID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerID="9380cff48ee91161c6b7a930159a88a7b204cb44f727f0c73879abbb5f388b3e" exitCode=0 Dec 01 09:49:12 crc kubenswrapper[4933]: I1201 09:49:12.746280 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerDied","Data":"9380cff48ee91161c6b7a930159a88a7b204cb44f727f0c73879abbb5f388b3e"} Dec 01 09:49:12 crc kubenswrapper[4933]: I1201 09:49:12.746517 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerStarted","Data":"9a07b0704942b02814f9e2cbae890ab2665cd7af25f4178c9003a8e4c8ac846a"} Dec 01 09:49:12 crc kubenswrapper[4933]: I1201 09:49:12.746546 4933 scope.go:117] "RemoveContainer" containerID="9b9c9050f180243e388ba92ad81faccae53ee3940480103d59e4ab9a26921bbd" Dec 01 09:49:21 crc kubenswrapper[4933]: I1201 09:49:21.881364 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xcc8g"] Dec 01 09:49:21 crc kubenswrapper[4933]: I1201 09:49:21.884650 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xcc8g" Dec 01 09:49:21 crc kubenswrapper[4933]: I1201 09:49:21.891189 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xcc8g"] Dec 01 09:49:21 crc kubenswrapper[4933]: I1201 09:49:21.900788 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 01 09:49:21 crc kubenswrapper[4933]: I1201 09:49:21.901037 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-xpnt2" Dec 01 09:49:21 crc kubenswrapper[4933]: I1201 09:49:21.901496 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 01 09:49:21 crc kubenswrapper[4933]: I1201 09:49:21.903590 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 01 09:49:21 crc kubenswrapper[4933]: I1201 09:49:21.971730 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7a0d9ea-050c-4472-abc2-709c1052a058-config\") pod \"dnsmasq-dns-675f4bcbfc-xcc8g\" (UID: \"f7a0d9ea-050c-4472-abc2-709c1052a058\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xcc8g" Dec 01 09:49:21 crc kubenswrapper[4933]: I1201 09:49:21.971908 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxxmv\" (UniqueName: \"kubernetes.io/projected/f7a0d9ea-050c-4472-abc2-709c1052a058-kube-api-access-sxxmv\") pod \"dnsmasq-dns-675f4bcbfc-xcc8g\" (UID: \"f7a0d9ea-050c-4472-abc2-709c1052a058\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xcc8g" Dec 01 09:49:21 crc kubenswrapper[4933]: I1201 09:49:21.994820 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-crjgc"] Dec 01 09:49:21 crc kubenswrapper[4933]: I1201 09:49:21.998990 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-crjgc" Dec 01 09:49:22 crc kubenswrapper[4933]: I1201 09:49:22.004830 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 01 09:49:22 crc kubenswrapper[4933]: I1201 09:49:22.076895 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98d9a2ca-cef0-4de2-9067-93754ed34ba3-config\") pod \"dnsmasq-dns-78dd6ddcc-crjgc\" (UID: \"98d9a2ca-cef0-4de2-9067-93754ed34ba3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-crjgc" Dec 01 09:49:22 crc kubenswrapper[4933]: I1201 09:49:22.076930 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98d9a2ca-cef0-4de2-9067-93754ed34ba3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-crjgc\" (UID: \"98d9a2ca-cef0-4de2-9067-93754ed34ba3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-crjgc" Dec 01 09:49:22 crc kubenswrapper[4933]: I1201 09:49:22.076985 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7a0d9ea-050c-4472-abc2-709c1052a058-config\") pod \"dnsmasq-dns-675f4bcbfc-xcc8g\" (UID: \"f7a0d9ea-050c-4472-abc2-709c1052a058\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xcc8g" Dec 01 09:49:22 crc kubenswrapper[4933]: I1201 09:49:22.077023 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkxg4\" (UniqueName: \"kubernetes.io/projected/98d9a2ca-cef0-4de2-9067-93754ed34ba3-kube-api-access-fkxg4\") pod \"dnsmasq-dns-78dd6ddcc-crjgc\" (UID: \"98d9a2ca-cef0-4de2-9067-93754ed34ba3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-crjgc" Dec 01 09:49:22 crc kubenswrapper[4933]: I1201 09:49:22.077053 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxxmv\" (UniqueName: \"kubernetes.io/projected/f7a0d9ea-050c-4472-abc2-709c1052a058-kube-api-access-sxxmv\") pod \"dnsmasq-dns-675f4bcbfc-xcc8g\" (UID: \"f7a0d9ea-050c-4472-abc2-709c1052a058\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xcc8g" Dec 01 09:49:22 crc kubenswrapper[4933]: I1201 09:49:22.078293 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7a0d9ea-050c-4472-abc2-709c1052a058-config\") pod \"dnsmasq-dns-675f4bcbfc-xcc8g\" (UID: \"f7a0d9ea-050c-4472-abc2-709c1052a058\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xcc8g" Dec 01 09:49:22 crc kubenswrapper[4933]: I1201 09:49:22.093529 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-crjgc"] Dec 01 09:49:22 crc kubenswrapper[4933]: I1201 09:49:22.118221 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxxmv\" (UniqueName: \"kubernetes.io/projected/f7a0d9ea-050c-4472-abc2-709c1052a058-kube-api-access-sxxmv\") pod \"dnsmasq-dns-675f4bcbfc-xcc8g\" (UID: \"f7a0d9ea-050c-4472-abc2-709c1052a058\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xcc8g" Dec 01 09:49:22 crc kubenswrapper[4933]: I1201 09:49:22.178219 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98d9a2ca-cef0-4de2-9067-93754ed34ba3-config\") pod \"dnsmasq-dns-78dd6ddcc-crjgc\" (UID: \"98d9a2ca-cef0-4de2-9067-93754ed34ba3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-crjgc" Dec 01 09:49:22 crc kubenswrapper[4933]: I1201 
09:49:22.178279 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98d9a2ca-cef0-4de2-9067-93754ed34ba3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-crjgc\" (UID: \"98d9a2ca-cef0-4de2-9067-93754ed34ba3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-crjgc" Dec 01 09:49:22 crc kubenswrapper[4933]: I1201 09:49:22.178376 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkxg4\" (UniqueName: \"kubernetes.io/projected/98d9a2ca-cef0-4de2-9067-93754ed34ba3-kube-api-access-fkxg4\") pod \"dnsmasq-dns-78dd6ddcc-crjgc\" (UID: \"98d9a2ca-cef0-4de2-9067-93754ed34ba3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-crjgc" Dec 01 09:49:22 crc kubenswrapper[4933]: I1201 09:49:22.180404 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98d9a2ca-cef0-4de2-9067-93754ed34ba3-config\") pod \"dnsmasq-dns-78dd6ddcc-crjgc\" (UID: \"98d9a2ca-cef0-4de2-9067-93754ed34ba3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-crjgc" Dec 01 09:49:22 crc kubenswrapper[4933]: I1201 09:49:22.180404 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98d9a2ca-cef0-4de2-9067-93754ed34ba3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-crjgc\" (UID: \"98d9a2ca-cef0-4de2-9067-93754ed34ba3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-crjgc" Dec 01 09:49:22 crc kubenswrapper[4933]: I1201 09:49:22.204814 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkxg4\" (UniqueName: \"kubernetes.io/projected/98d9a2ca-cef0-4de2-9067-93754ed34ba3-kube-api-access-fkxg4\") pod \"dnsmasq-dns-78dd6ddcc-crjgc\" (UID: \"98d9a2ca-cef0-4de2-9067-93754ed34ba3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-crjgc" Dec 01 09:49:22 crc kubenswrapper[4933]: I1201 09:49:22.214473 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xcc8g" Dec 01 09:49:22 crc kubenswrapper[4933]: I1201 09:49:22.378395 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-crjgc" Dec 01 09:49:22 crc kubenswrapper[4933]: I1201 09:49:22.798103 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-crjgc"] Dec 01 09:49:22 crc kubenswrapper[4933]: I1201 09:49:22.846206 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-crjgc" event={"ID":"98d9a2ca-cef0-4de2-9067-93754ed34ba3","Type":"ContainerStarted","Data":"08bd0724879652a1a021a06316920facf9acf670cbc38752deb4b7352e448f16"} Dec 01 09:49:22 crc kubenswrapper[4933]: I1201 09:49:22.846458 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xcc8g"] Dec 01 09:49:22 crc kubenswrapper[4933]: W1201 09:49:22.851583 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7a0d9ea_050c_4472_abc2_709c1052a058.slice/crio-ecd5adb70b8660ed1fb20cdd126e183c0072e8d30fa65845944e5cd5578b8657 WatchSource:0}: Error finding container ecd5adb70b8660ed1fb20cdd126e183c0072e8d30fa65845944e5cd5578b8657: Status 404 returned error can't find the container with id ecd5adb70b8660ed1fb20cdd126e183c0072e8d30fa65845944e5cd5578b8657 Dec 01 09:49:23 crc kubenswrapper[4933]: I1201 09:49:23.935550 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-xcc8g" event={"ID":"f7a0d9ea-050c-4472-abc2-709c1052a058","Type":"ContainerStarted","Data":"ecd5adb70b8660ed1fb20cdd126e183c0072e8d30fa65845944e5cd5578b8657"} Dec 01 09:49:24 crc kubenswrapper[4933]: I1201 09:49:24.803164 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xcc8g"] Dec 01 09:49:24 crc kubenswrapper[4933]: I1201 09:49:24.830347 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-qz445"] Dec 01 09:49:24 crc kubenswrapper[4933]: I1201 09:49:24.833377 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-qz445" Dec 01 09:49:24 crc kubenswrapper[4933]: I1201 09:49:24.853543 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-qz445"] Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.005120 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqbdx\" (UniqueName: \"kubernetes.io/projected/01a1a54d-3a8e-4dfd-aed1-d904670bce61-kube-api-access-lqbdx\") pod \"dnsmasq-dns-5ccc8479f9-qz445\" (UID: \"01a1a54d-3a8e-4dfd-aed1-d904670bce61\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qz445" Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.005192 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a1a54d-3a8e-4dfd-aed1-d904670bce61-config\") pod \"dnsmasq-dns-5ccc8479f9-qz445\" (UID: \"01a1a54d-3a8e-4dfd-aed1-d904670bce61\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qz445" Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.005229 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01a1a54d-3a8e-4dfd-aed1-d904670bce61-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-qz445\" (UID: \"01a1a54d-3a8e-4dfd-aed1-d904670bce61\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qz445" Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.106511 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqbdx\" (UniqueName: \"kubernetes.io/projected/01a1a54d-3a8e-4dfd-aed1-d904670bce61-kube-api-access-lqbdx\") pod \"dnsmasq-dns-5ccc8479f9-qz445\" (UID: \"01a1a54d-3a8e-4dfd-aed1-d904670bce61\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qz445" Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.106583 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a1a54d-3a8e-4dfd-aed1-d904670bce61-config\") pod \"dnsmasq-dns-5ccc8479f9-qz445\" (UID: \"01a1a54d-3a8e-4dfd-aed1-d904670bce61\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qz445" Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.106636 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01a1a54d-3a8e-4dfd-aed1-d904670bce61-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-qz445\" (UID: \"01a1a54d-3a8e-4dfd-aed1-d904670bce61\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qz445" Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.107428 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a1a54d-3a8e-4dfd-aed1-d904670bce61-config\") pod \"dnsmasq-dns-5ccc8479f9-qz445\" (UID: \"01a1a54d-3a8e-4dfd-aed1-d904670bce61\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qz445" Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.109772 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01a1a54d-3a8e-4dfd-aed1-d904670bce61-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-qz445\" (UID: \"01a1a54d-3a8e-4dfd-aed1-d904670bce61\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qz445" Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.135118 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqbdx\" (UniqueName: 
\"kubernetes.io/projected/01a1a54d-3a8e-4dfd-aed1-d904670bce61-kube-api-access-lqbdx\") pod \"dnsmasq-dns-5ccc8479f9-qz445\" (UID: \"01a1a54d-3a8e-4dfd-aed1-d904670bce61\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qz445" Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.193936 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-crjgc"] Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.196057 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-qz445" Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.229448 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2z4vq"] Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.231525 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2z4vq" Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.253758 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2z4vq"] Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.411210 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89ghp\" (UniqueName: \"kubernetes.io/projected/cb7463c3-8d88-4334-8627-a4f62371faf8-kube-api-access-89ghp\") pod \"dnsmasq-dns-57d769cc4f-2z4vq\" (UID: \"cb7463c3-8d88-4334-8627-a4f62371faf8\") " pod="openstack/dnsmasq-dns-57d769cc4f-2z4vq" Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.411270 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb7463c3-8d88-4334-8627-a4f62371faf8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2z4vq\" (UID: \"cb7463c3-8d88-4334-8627-a4f62371faf8\") " pod="openstack/dnsmasq-dns-57d769cc4f-2z4vq" Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.411425 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb7463c3-8d88-4334-8627-a4f62371faf8-config\") pod \"dnsmasq-dns-57d769cc4f-2z4vq\" (UID: \"cb7463c3-8d88-4334-8627-a4f62371faf8\") " pod="openstack/dnsmasq-dns-57d769cc4f-2z4vq" Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.516377 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89ghp\" (UniqueName: \"kubernetes.io/projected/cb7463c3-8d88-4334-8627-a4f62371faf8-kube-api-access-89ghp\") pod \"dnsmasq-dns-57d769cc4f-2z4vq\" (UID: \"cb7463c3-8d88-4334-8627-a4f62371faf8\") " pod="openstack/dnsmasq-dns-57d769cc4f-2z4vq" Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.516472 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb7463c3-8d88-4334-8627-a4f62371faf8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2z4vq\" (UID: \"cb7463c3-8d88-4334-8627-a4f62371faf8\") " pod="openstack/dnsmasq-dns-57d769cc4f-2z4vq" Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.516724 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb7463c3-8d88-4334-8627-a4f62371faf8-config\") pod \"dnsmasq-dns-57d769cc4f-2z4vq\" (UID: \"cb7463c3-8d88-4334-8627-a4f62371faf8\") " pod="openstack/dnsmasq-dns-57d769cc4f-2z4vq" Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.519287 4933 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb7463c3-8d88-4334-8627-a4f62371faf8-config\") pod \"dnsmasq-dns-57d769cc4f-2z4vq\" (UID: \"cb7463c3-8d88-4334-8627-a4f62371faf8\") " pod="openstack/dnsmasq-dns-57d769cc4f-2z4vq" Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.520123 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb7463c3-8d88-4334-8627-a4f62371faf8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2z4vq\" (UID: \"cb7463c3-8d88-4334-8627-a4f62371faf8\") " pod="openstack/dnsmasq-dns-57d769cc4f-2z4vq" Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.541799 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89ghp\" (UniqueName: \"kubernetes.io/projected/cb7463c3-8d88-4334-8627-a4f62371faf8-kube-api-access-89ghp\") pod \"dnsmasq-dns-57d769cc4f-2z4vq\" (UID: \"cb7463c3-8d88-4334-8627-a4f62371faf8\") " pod="openstack/dnsmasq-dns-57d769cc4f-2z4vq" Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.623781 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2z4vq" Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.804605 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-qz445"] Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.982415 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2z4vq"] Dec 01 09:49:25 crc kubenswrapper[4933]: I1201 09:49:25.988352 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-qz445" event={"ID":"01a1a54d-3a8e-4dfd-aed1-d904670bce61","Type":"ContainerStarted","Data":"d0887282ea0ab4b31ecd152821667507267cb6fe5763504c1e9031c768550184"} Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.026794 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.028991 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.032982 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.033124 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.033537 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.033420 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.033609 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.033788 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.034291 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-r782b" Dec 01 09:49:26 crc kubenswrapper[4933]: W1201 09:49:26.040937 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb7463c3_8d88_4334_8627_a4f62371faf8.slice/crio-8f1322c328882a3249562d54bf7b54189c82b83e3926ba59723d5d99571540e1 WatchSource:0}: Error finding container 8f1322c328882a3249562d54bf7b54189c82b83e3926ba59723d5d99571540e1: Status 404 returned error can't find the container with id 8f1322c328882a3249562d54bf7b54189c82b83e3926ba59723d5d99571540e1 Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.050571 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.208643 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b8f90456-f375-447c-8f32-8ca629a28861-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.208698 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b8f90456-f375-447c-8f32-8ca629a28861-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.208726 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.208762 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b8f90456-f375-447c-8f32-8ca629a28861-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.208786 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b8f90456-f375-447c-8f32-8ca629a28861-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.208806 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79bgq\" (UniqueName: \"kubernetes.io/projected/b8f90456-f375-447c-8f32-8ca629a28861-kube-api-access-79bgq\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.208824 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b8f90456-f375-447c-8f32-8ca629a28861-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.208856 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8f90456-f375-447c-8f32-8ca629a28861-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.208874 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b8f90456-f375-447c-8f32-8ca629a28861-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.208900 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b8f90456-f375-447c-8f32-8ca629a28861-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.208928 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b8f90456-f375-447c-8f32-8ca629a28861-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.316006 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8f90456-f375-447c-8f32-8ca629a28861-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.316116 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b8f90456-f375-447c-8f32-8ca629a28861-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.316240 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b8f90456-f375-447c-8f32-8ca629a28861-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.316339 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b8f90456-f375-447c-8f32-8ca629a28861-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.316411 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b8f90456-f375-447c-8f32-8ca629a28861-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.316451 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b8f90456-f375-447c-8f32-8ca629a28861-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.316516 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.316628 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b8f90456-f375-447c-8f32-8ca629a28861-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.316679 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b8f90456-f375-447c-8f32-8ca629a28861-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.316721 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79bgq\" (UniqueName: \"kubernetes.io/projected/b8f90456-f375-447c-8f32-8ca629a28861-kube-api-access-79bgq\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.316752 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b8f90456-f375-447c-8f32-8ca629a28861-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.316743 4933 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b8f90456-f375-447c-8f32-8ca629a28861-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.317452 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8f90456-f375-447c-8f32-8ca629a28861-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.317487 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b8f90456-f375-447c-8f32-8ca629a28861-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.317859 4933 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.318528 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b8f90456-f375-447c-8f32-8ca629a28861-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.318626 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b8f90456-f375-447c-8f32-8ca629a28861-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.324973 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b8f90456-f375-447c-8f32-8ca629a28861-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.327122 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b8f90456-f375-447c-8f32-8ca629a28861-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.327705 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b8f90456-f375-447c-8f32-8ca629a28861-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.341733 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b8f90456-f375-447c-8f32-8ca629a28861-pod-info\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.345751 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79bgq\" (UniqueName: \"kubernetes.io/projected/b8f90456-f375-447c-8f32-8ca629a28861-kube-api-access-79bgq\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.349569 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.372073 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.376079 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.378119 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.381829 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.382230 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.384622 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.384873 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.385053 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.390890 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-6g7jz" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.393290 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.414985 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.554629 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.554759 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.554791 4933 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.554817 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.554844 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.554868 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmcdh\" (UniqueName: \"kubernetes.io/projected/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-kube-api-access-tmcdh\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.554900 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.554931 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-config-data\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.555006 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.555056 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.555112 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.656448 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.656526 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-config-data\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.656563 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.656610 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.656654 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.656686 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.656763 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.656794 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.656827 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.656852 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.656873 
4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmcdh\" (UniqueName: \"kubernetes.io/projected/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-kube-api-access-tmcdh\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.657479 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.657742 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-config-data\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.657791 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.657813 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.658030 4933 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.671466 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.674733 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.675819 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.676470 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " 
pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.681591 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.688743 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmcdh\" (UniqueName: \"kubernetes.io/projected/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-kube-api-access-tmcdh\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.754717 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " pod="openstack/rabbitmq-server-0" Dec 01 09:49:26 crc kubenswrapper[4933]: I1201 09:49:26.925233 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.030894 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2z4vq" event={"ID":"cb7463c3-8d88-4334-8627-a4f62371faf8","Type":"ContainerStarted","Data":"8f1322c328882a3249562d54bf7b54189c82b83e3926ba59723d5d99571540e1"} Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.033751 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 09:49:27 crc kubenswrapper[4933]: W1201 09:49:27.046205 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8f90456_f375_447c_8f32_8ca629a28861.slice/crio-d461da9c89612e7a1026ab43c9ed5302c0af2c1a4941a7d4082105bc8f9d10fe WatchSource:0}: Error finding container d461da9c89612e7a1026ab43c9ed5302c0af2c1a4941a7d4082105bc8f9d10fe: Status 404 returned error can't find the container with id d461da9c89612e7a1026ab43c9ed5302c0af2c1a4941a7d4082105bc8f9d10fe Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.570947 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.729002 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.733565 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.737563 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.738259 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.739221 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-68zkm" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.739252 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.740388 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.750488 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.813661 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba27bfa-74d8-4df5-8217-666a02132516-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1ba27bfa-74d8-4df5-8217-666a02132516\") " pod="openstack/openstack-galera-0" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.814925 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqws4\" (UniqueName: \"kubernetes.io/projected/1ba27bfa-74d8-4df5-8217-666a02132516-kube-api-access-gqws4\") pod \"openstack-galera-0\" (UID: \"1ba27bfa-74d8-4df5-8217-666a02132516\") " pod="openstack/openstack-galera-0" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.815001 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1ba27bfa-74d8-4df5-8217-666a02132516-kolla-config\") pod \"openstack-galera-0\" (UID: \"1ba27bfa-74d8-4df5-8217-666a02132516\") " pod="openstack/openstack-galera-0" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.815030 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ba27bfa-74d8-4df5-8217-666a02132516-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1ba27bfa-74d8-4df5-8217-666a02132516\") " pod="openstack/openstack-galera-0" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.815084 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"1ba27bfa-74d8-4df5-8217-666a02132516\") " pod="openstack/openstack-galera-0" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.815146 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ba27bfa-74d8-4df5-8217-666a02132516-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1ba27bfa-74d8-4df5-8217-666a02132516\") " pod="openstack/openstack-galera-0" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.815198 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1ba27bfa-74d8-4df5-8217-666a02132516-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1ba27bfa-74d8-4df5-8217-666a02132516\") " pod="openstack/openstack-galera-0" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.815238 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1ba27bfa-74d8-4df5-8217-666a02132516-config-data-default\") pod \"openstack-galera-0\" (UID: \"1ba27bfa-74d8-4df5-8217-666a02132516\") " pod="openstack/openstack-galera-0" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.925570 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1ba27bfa-74d8-4df5-8217-666a02132516-config-data-default\") pod \"openstack-galera-0\" (UID: \"1ba27bfa-74d8-4df5-8217-666a02132516\") " pod="openstack/openstack-galera-0" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.925689 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba27bfa-74d8-4df5-8217-666a02132516-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1ba27bfa-74d8-4df5-8217-666a02132516\") " pod="openstack/openstack-galera-0" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.925728 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqws4\" (UniqueName: \"kubernetes.io/projected/1ba27bfa-74d8-4df5-8217-666a02132516-kube-api-access-gqws4\") pod \"openstack-galera-0\" (UID: \"1ba27bfa-74d8-4df5-8217-666a02132516\") " pod="openstack/openstack-galera-0" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.925764 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1ba27bfa-74d8-4df5-8217-666a02132516-kolla-config\") pod \"openstack-galera-0\" (UID: \"1ba27bfa-74d8-4df5-8217-666a02132516\") " pod="openstack/openstack-galera-0" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.925787 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ba27bfa-74d8-4df5-8217-666a02132516-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1ba27bfa-74d8-4df5-8217-666a02132516\") " pod="openstack/openstack-galera-0" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.925823 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"1ba27bfa-74d8-4df5-8217-666a02132516\") " pod="openstack/openstack-galera-0" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.925853 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ba27bfa-74d8-4df5-8217-666a02132516-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1ba27bfa-74d8-4df5-8217-666a02132516\") " pod="openstack/openstack-galera-0" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.925875 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1ba27bfa-74d8-4df5-8217-666a02132516-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"1ba27bfa-74d8-4df5-8217-666a02132516\") " pod="openstack/openstack-galera-0" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.926765 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1ba27bfa-74d8-4df5-8217-666a02132516-config-data-default\") pod \"openstack-galera-0\" (UID: \"1ba27bfa-74d8-4df5-8217-666a02132516\") " pod="openstack/openstack-galera-0" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.927028 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1ba27bfa-74d8-4df5-8217-666a02132516-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1ba27bfa-74d8-4df5-8217-666a02132516\") " pod="openstack/openstack-galera-0" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.927946 4933 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"1ba27bfa-74d8-4df5-8217-666a02132516\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.928951 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ba27bfa-74d8-4df5-8217-666a02132516-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1ba27bfa-74d8-4df5-8217-666a02132516\") " pod="openstack/openstack-galera-0" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.929254 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1ba27bfa-74d8-4df5-8217-666a02132516-kolla-config\") pod \"openstack-galera-0\" (UID: \"1ba27bfa-74d8-4df5-8217-666a02132516\") " pod="openstack/openstack-galera-0" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.941026 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba27bfa-74d8-4df5-8217-666a02132516-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1ba27bfa-74d8-4df5-8217-666a02132516\") " pod="openstack/openstack-galera-0" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.949315 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ba27bfa-74d8-4df5-8217-666a02132516-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1ba27bfa-74d8-4df5-8217-666a02132516\") " pod="openstack/openstack-galera-0" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.950521 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqws4\" (UniqueName: \"kubernetes.io/projected/1ba27bfa-74d8-4df5-8217-666a02132516-kube-api-access-gqws4\") pod \"openstack-galera-0\" (UID: \"1ba27bfa-74d8-4df5-8217-666a02132516\") " pod="openstack/openstack-galera-0" Dec 01 09:49:27 crc kubenswrapper[4933]: I1201 09:49:27.962966 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"1ba27bfa-74d8-4df5-8217-666a02132516\") " pod="openstack/openstack-galera-0" Dec 01 09:49:28 crc kubenswrapper[4933]: I1201 09:49:28.088011 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 01 09:49:28 crc kubenswrapper[4933]: I1201 09:49:28.215705 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b8f90456-f375-447c-8f32-8ca629a28861","Type":"ContainerStarted","Data":"d461da9c89612e7a1026ab43c9ed5302c0af2c1a4941a7d4082105bc8f9d10fe"} Dec 01 09:49:28 crc kubenswrapper[4933]: I1201 09:49:28.219729 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec","Type":"ContainerStarted","Data":"752c53b2a372f6ed1ebf0a780319dc9129efb939adb3423f80370693eb6b7cc9"} Dec 01 09:49:28 crc kubenswrapper[4933]: I1201 09:49:28.668099 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 01 09:49:28 crc kubenswrapper[4933]: I1201 09:49:28.918052 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 01 09:49:28 crc kubenswrapper[4933]: I1201 09:49:28.922040 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:28 crc kubenswrapper[4933]: I1201 09:49:28.925334 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-cz9mw" Dec 01 09:49:28 crc kubenswrapper[4933]: I1201 09:49:28.925500 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 01 09:49:28 crc kubenswrapper[4933]: I1201 09:49:28.925611 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 01 09:49:28 crc kubenswrapper[4933]: I1201 09:49:28.928191 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 01 09:49:28 crc kubenswrapper[4933]: I1201 09:49:28.936371 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.087070 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/55a605ff-7d52-4d80-bd32-6301d0c696c1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"55a605ff-7d52-4d80-bd32-6301d0c696c1\") " pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.087159 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/55a605ff-7d52-4d80-bd32-6301d0c696c1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"55a605ff-7d52-4d80-bd32-6301d0c696c1\") " pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.087199 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/55a605ff-7d52-4d80-bd32-6301d0c696c1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"55a605ff-7d52-4d80-bd32-6301d0c696c1\") " pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.087227 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7ksv\" (UniqueName: \"kubernetes.io/projected/55a605ff-7d52-4d80-bd32-6301d0c696c1-kube-api-access-j7ksv\") pod 
\"openstack-cell1-galera-0\" (UID: \"55a605ff-7d52-4d80-bd32-6301d0c696c1\") " pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.087404 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"55a605ff-7d52-4d80-bd32-6301d0c696c1\") " pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.087476 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/55a605ff-7d52-4d80-bd32-6301d0c696c1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"55a605ff-7d52-4d80-bd32-6301d0c696c1\") " pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.087497 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55a605ff-7d52-4d80-bd32-6301d0c696c1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"55a605ff-7d52-4d80-bd32-6301d0c696c1\") " pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.087551 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55a605ff-7d52-4d80-bd32-6301d0c696c1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"55a605ff-7d52-4d80-bd32-6301d0c696c1\") " pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.188516 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.189319 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/55a605ff-7d52-4d80-bd32-6301d0c696c1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"55a605ff-7d52-4d80-bd32-6301d0c696c1\") " pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.189375 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55a605ff-7d52-4d80-bd32-6301d0c696c1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"55a605ff-7d52-4d80-bd32-6301d0c696c1\") " pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.189420 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55a605ff-7d52-4d80-bd32-6301d0c696c1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"55a605ff-7d52-4d80-bd32-6301d0c696c1\") " pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.189490 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/55a605ff-7d52-4d80-bd32-6301d0c696c1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"55a605ff-7d52-4d80-bd32-6301d0c696c1\") " pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.189550 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-default\" (UniqueName: \"kubernetes.io/configmap/55a605ff-7d52-4d80-bd32-6301d0c696c1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"55a605ff-7d52-4d80-bd32-6301d0c696c1\") " pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.189576 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/55a605ff-7d52-4d80-bd32-6301d0c696c1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"55a605ff-7d52-4d80-bd32-6301d0c696c1\") " pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.189608 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7ksv\" (UniqueName: \"kubernetes.io/projected/55a605ff-7d52-4d80-bd32-6301d0c696c1-kube-api-access-j7ksv\") pod \"openstack-cell1-galera-0\" (UID: \"55a605ff-7d52-4d80-bd32-6301d0c696c1\") " pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.189682 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"55a605ff-7d52-4d80-bd32-6301d0c696c1\") " pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.190184 4933 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"55a605ff-7d52-4d80-bd32-6301d0c696c1\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.190514 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.192436 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/55a605ff-7d52-4d80-bd32-6301d0c696c1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"55a605ff-7d52-4d80-bd32-6301d0c696c1\") " pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.201493 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.201846 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-9sb56" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.202045 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.205645 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55a605ff-7d52-4d80-bd32-6301d0c696c1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"55a605ff-7d52-4d80-bd32-6301d0c696c1\") " pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.206643 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/55a605ff-7d52-4d80-bd32-6301d0c696c1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"55a605ff-7d52-4d80-bd32-6301d0c696c1\") " pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.240502 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/55a605ff-7d52-4d80-bd32-6301d0c696c1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"55a605ff-7d52-4d80-bd32-6301d0c696c1\") " pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.253345 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.264909 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/55a605ff-7d52-4d80-bd32-6301d0c696c1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"55a605ff-7d52-4d80-bd32-6301d0c696c1\") " pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.266293 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55a605ff-7d52-4d80-bd32-6301d0c696c1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"55a605ff-7d52-4d80-bd32-6301d0c696c1\") " pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.278907 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7ksv\" (UniqueName: \"kubernetes.io/projected/55a605ff-7d52-4d80-bd32-6301d0c696c1-kube-api-access-j7ksv\") pod \"openstack-cell1-galera-0\" (UID: \"55a605ff-7d52-4d80-bd32-6301d0c696c1\") " pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.280812 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"1ba27bfa-74d8-4df5-8217-666a02132516","Type":"ContainerStarted","Data":"7c26a18dfef4daca2c578a68d991b2446d9344535cd7aec1871626a5984c8d8e"} Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.293049 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12be8eec-c6b1-4606-83de-e19ac2ab17eb-combined-ca-bundle\") pod \"memcached-0\" (UID: \"12be8eec-c6b1-4606-83de-e19ac2ab17eb\") " pod="openstack/memcached-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.293103 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hr99\" (UniqueName: \"kubernetes.io/projected/12be8eec-c6b1-4606-83de-e19ac2ab17eb-kube-api-access-8hr99\") pod \"memcached-0\" (UID: \"12be8eec-c6b1-4606-83de-e19ac2ab17eb\") " pod="openstack/memcached-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.293155 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12be8eec-c6b1-4606-83de-e19ac2ab17eb-config-data\") pod \"memcached-0\" (UID: \"12be8eec-c6b1-4606-83de-e19ac2ab17eb\") " pod="openstack/memcached-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.293186 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/12be8eec-c6b1-4606-83de-e19ac2ab17eb-kolla-config\") pod \"memcached-0\" (UID: \"12be8eec-c6b1-4606-83de-e19ac2ab17eb\") " pod="openstack/memcached-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.293221 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/12be8eec-c6b1-4606-83de-e19ac2ab17eb-memcached-tls-certs\") pod \"memcached-0\" (UID: \"12be8eec-c6b1-4606-83de-e19ac2ab17eb\") " pod="openstack/memcached-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.310814 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"55a605ff-7d52-4d80-bd32-6301d0c696c1\") " pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.394507 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12be8eec-c6b1-4606-83de-e19ac2ab17eb-combined-ca-bundle\") pod \"memcached-0\" (UID: \"12be8eec-c6b1-4606-83de-e19ac2ab17eb\") " pod="openstack/memcached-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.394570 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hr99\" (UniqueName: \"kubernetes.io/projected/12be8eec-c6b1-4606-83de-e19ac2ab17eb-kube-api-access-8hr99\") pod \"memcached-0\" (UID: \"12be8eec-c6b1-4606-83de-e19ac2ab17eb\") " pod="openstack/memcached-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.394614 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12be8eec-c6b1-4606-83de-e19ac2ab17eb-config-data\") pod \"memcached-0\" (UID: \"12be8eec-c6b1-4606-83de-e19ac2ab17eb\") " pod="openstack/memcached-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.394644 4933 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/12be8eec-c6b1-4606-83de-e19ac2ab17eb-kolla-config\") pod \"memcached-0\" (UID: \"12be8eec-c6b1-4606-83de-e19ac2ab17eb\") " pod="openstack/memcached-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.394683 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/12be8eec-c6b1-4606-83de-e19ac2ab17eb-memcached-tls-certs\") pod \"memcached-0\" (UID: \"12be8eec-c6b1-4606-83de-e19ac2ab17eb\") " pod="openstack/memcached-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.408450 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12be8eec-c6b1-4606-83de-e19ac2ab17eb-combined-ca-bundle\") pod \"memcached-0\" (UID: \"12be8eec-c6b1-4606-83de-e19ac2ab17eb\") " pod="openstack/memcached-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.410258 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/12be8eec-c6b1-4606-83de-e19ac2ab17eb-kolla-config\") pod \"memcached-0\" (UID: \"12be8eec-c6b1-4606-83de-e19ac2ab17eb\") " pod="openstack/memcached-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.420164 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12be8eec-c6b1-4606-83de-e19ac2ab17eb-config-data\") pod \"memcached-0\" (UID: \"12be8eec-c6b1-4606-83de-e19ac2ab17eb\") " pod="openstack/memcached-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.448201 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/12be8eec-c6b1-4606-83de-e19ac2ab17eb-memcached-tls-certs\") pod \"memcached-0\" (UID: \"12be8eec-c6b1-4606-83de-e19ac2ab17eb\") " pod="openstack/memcached-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.463366 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hr99\" (UniqueName: \"kubernetes.io/projected/12be8eec-c6b1-4606-83de-e19ac2ab17eb-kube-api-access-8hr99\") pod \"memcached-0\" (UID: \"12be8eec-c6b1-4606-83de-e19ac2ab17eb\") " pod="openstack/memcached-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.561205 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 01 09:49:29 crc kubenswrapper[4933]: I1201 09:49:29.734510 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 01 09:49:31 crc kubenswrapper[4933]: I1201 09:49:31.154432 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 09:49:31 crc kubenswrapper[4933]: I1201 09:49:31.156431 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 01 09:49:31 crc kubenswrapper[4933]: I1201 09:49:31.161206 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-6c9nm" Dec 01 09:49:31 crc kubenswrapper[4933]: I1201 09:49:31.185810 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 09:49:31 crc kubenswrapper[4933]: I1201 09:49:31.207424 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2cp7\" (UniqueName: \"kubernetes.io/projected/c7de84cc-bb1a-45ba-bbba-acc140d0facc-kube-api-access-d2cp7\") pod \"kube-state-metrics-0\" (UID: \"c7de84cc-bb1a-45ba-bbba-acc140d0facc\") " pod="openstack/kube-state-metrics-0" Dec 01 09:49:31 crc kubenswrapper[4933]: I1201 09:49:31.354536 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2cp7\" (UniqueName: \"kubernetes.io/projected/c7de84cc-bb1a-45ba-bbba-acc140d0facc-kube-api-access-d2cp7\") pod \"kube-state-metrics-0\" (UID: \"c7de84cc-bb1a-45ba-bbba-acc140d0facc\") " pod="openstack/kube-state-metrics-0" Dec 01 09:49:31 crc kubenswrapper[4933]: I1201 09:49:31.376776 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2cp7\" (UniqueName: \"kubernetes.io/projected/c7de84cc-bb1a-45ba-bbba-acc140d0facc-kube-api-access-d2cp7\") pod \"kube-state-metrics-0\" (UID: \"c7de84cc-bb1a-45ba-bbba-acc140d0facc\") " pod="openstack/kube-state-metrics-0" Dec 01 09:49:31 crc kubenswrapper[4933]: I1201 09:49:31.494798 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.188611 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5tgrr"] Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.190801 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5tgrr" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.197327 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-l8bgh"] Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.198978 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-l8bgh" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.206926 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.206942 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-bm8gt" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.209432 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.217393 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5tgrr"] Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.234831 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-l8bgh"] Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.269581 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a1f2d08e-94f8-47ec-9e7e-a4722b71b609-var-run-ovn\") pod \"ovn-controller-5tgrr\" (UID: \"a1f2d08e-94f8-47ec-9e7e-a4722b71b609\") " pod="openstack/ovn-controller-5tgrr" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.271225 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1f2d08e-94f8-47ec-9e7e-a4722b71b609-scripts\") pod \"ovn-controller-5tgrr\" (UID: \"a1f2d08e-94f8-47ec-9e7e-a4722b71b609\") " pod="openstack/ovn-controller-5tgrr" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.271603 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90984aaa-e287-4038-bc14-16debb186a8d-scripts\") pod \"ovn-controller-ovs-l8bgh\" (UID: \"90984aaa-e287-4038-bc14-16debb186a8d\") " pod="openstack/ovn-controller-ovs-l8bgh" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.271737 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f2d08e-94f8-47ec-9e7e-a4722b71b609-ovn-controller-tls-certs\") pod \"ovn-controller-5tgrr\" (UID: \"a1f2d08e-94f8-47ec-9e7e-a4722b71b609\") " pod="openstack/ovn-controller-5tgrr" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.271997 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/90984aaa-e287-4038-bc14-16debb186a8d-var-log\") pod \"ovn-controller-ovs-l8bgh\" (UID: \"90984aaa-e287-4038-bc14-16debb186a8d\") " pod="openstack/ovn-controller-ovs-l8bgh" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.272203 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/90984aaa-e287-4038-bc14-16debb186a8d-etc-ovs\") pod \"ovn-controller-ovs-l8bgh\" (UID: \"90984aaa-e287-4038-bc14-16debb186a8d\") " pod="openstack/ovn-controller-ovs-l8bgh" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.272413 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a1f2d08e-94f8-47ec-9e7e-a4722b71b609-var-log-ovn\") pod \"ovn-controller-5tgrr\" (UID: 
\"a1f2d08e-94f8-47ec-9e7e-a4722b71b609\") " pod="openstack/ovn-controller-5tgrr" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.272568 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzv5m\" (UniqueName: \"kubernetes.io/projected/90984aaa-e287-4038-bc14-16debb186a8d-kube-api-access-vzv5m\") pod \"ovn-controller-ovs-l8bgh\" (UID: \"90984aaa-e287-4038-bc14-16debb186a8d\") " pod="openstack/ovn-controller-ovs-l8bgh" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.272927 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1f2d08e-94f8-47ec-9e7e-a4722b71b609-combined-ca-bundle\") pod \"ovn-controller-5tgrr\" (UID: \"a1f2d08e-94f8-47ec-9e7e-a4722b71b609\") " pod="openstack/ovn-controller-5tgrr" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.273109 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw72w\" (UniqueName: \"kubernetes.io/projected/a1f2d08e-94f8-47ec-9e7e-a4722b71b609-kube-api-access-nw72w\") pod \"ovn-controller-5tgrr\" (UID: \"a1f2d08e-94f8-47ec-9e7e-a4722b71b609\") " pod="openstack/ovn-controller-5tgrr" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.273292 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/90984aaa-e287-4038-bc14-16debb186a8d-var-lib\") pod \"ovn-controller-ovs-l8bgh\" (UID: \"90984aaa-e287-4038-bc14-16debb186a8d\") " pod="openstack/ovn-controller-ovs-l8bgh" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.273426 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a1f2d08e-94f8-47ec-9e7e-a4722b71b609-var-run\") pod \"ovn-controller-5tgrr\" (UID: \"a1f2d08e-94f8-47ec-9e7e-a4722b71b609\") " pod="openstack/ovn-controller-5tgrr" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.273547 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/90984aaa-e287-4038-bc14-16debb186a8d-var-run\") pod \"ovn-controller-ovs-l8bgh\" (UID: \"90984aaa-e287-4038-bc14-16debb186a8d\") " pod="openstack/ovn-controller-ovs-l8bgh" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.377246 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/90984aaa-e287-4038-bc14-16debb186a8d-var-run\") pod \"ovn-controller-ovs-l8bgh\" (UID: \"90984aaa-e287-4038-bc14-16debb186a8d\") " pod="openstack/ovn-controller-ovs-l8bgh" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.377400 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a1f2d08e-94f8-47ec-9e7e-a4722b71b609-var-run-ovn\") pod \"ovn-controller-5tgrr\" (UID: \"a1f2d08e-94f8-47ec-9e7e-a4722b71b609\") " pod="openstack/ovn-controller-5tgrr" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.378100 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/90984aaa-e287-4038-bc14-16debb186a8d-var-run\") pod \"ovn-controller-ovs-l8bgh\" (UID: \"90984aaa-e287-4038-bc14-16debb186a8d\") " pod="openstack/ovn-controller-ovs-l8bgh" Dec 01 
09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.377448 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1f2d08e-94f8-47ec-9e7e-a4722b71b609-scripts\") pod \"ovn-controller-5tgrr\" (UID: \"a1f2d08e-94f8-47ec-9e7e-a4722b71b609\") " pod="openstack/ovn-controller-5tgrr" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.378185 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f2d08e-94f8-47ec-9e7e-a4722b71b609-ovn-controller-tls-certs\") pod \"ovn-controller-5tgrr\" (UID: \"a1f2d08e-94f8-47ec-9e7e-a4722b71b609\") " pod="openstack/ovn-controller-5tgrr" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.378182 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a1f2d08e-94f8-47ec-9e7e-a4722b71b609-var-run-ovn\") pod \"ovn-controller-5tgrr\" (UID: \"a1f2d08e-94f8-47ec-9e7e-a4722b71b609\") " pod="openstack/ovn-controller-5tgrr" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.378225 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90984aaa-e287-4038-bc14-16debb186a8d-scripts\") pod \"ovn-controller-ovs-l8bgh\" (UID: \"90984aaa-e287-4038-bc14-16debb186a8d\") " pod="openstack/ovn-controller-ovs-l8bgh" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.378254 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/90984aaa-e287-4038-bc14-16debb186a8d-var-log\") pod \"ovn-controller-ovs-l8bgh\" (UID: \"90984aaa-e287-4038-bc14-16debb186a8d\") " pod="openstack/ovn-controller-ovs-l8bgh" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.378277 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/90984aaa-e287-4038-bc14-16debb186a8d-etc-ovs\") pod \"ovn-controller-ovs-l8bgh\" (UID: \"90984aaa-e287-4038-bc14-16debb186a8d\") " pod="openstack/ovn-controller-ovs-l8bgh" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.378319 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a1f2d08e-94f8-47ec-9e7e-a4722b71b609-var-log-ovn\") pod \"ovn-controller-5tgrr\" (UID: \"a1f2d08e-94f8-47ec-9e7e-a4722b71b609\") " pod="openstack/ovn-controller-5tgrr" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.378345 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzv5m\" (UniqueName: \"kubernetes.io/projected/90984aaa-e287-4038-bc14-16debb186a8d-kube-api-access-vzv5m\") pod \"ovn-controller-ovs-l8bgh\" (UID: \"90984aaa-e287-4038-bc14-16debb186a8d\") " pod="openstack/ovn-controller-ovs-l8bgh" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.378368 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1f2d08e-94f8-47ec-9e7e-a4722b71b609-combined-ca-bundle\") pod \"ovn-controller-5tgrr\" (UID: \"a1f2d08e-94f8-47ec-9e7e-a4722b71b609\") " pod="openstack/ovn-controller-5tgrr" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.378396 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw72w\" (UniqueName: 
\"kubernetes.io/projected/a1f2d08e-94f8-47ec-9e7e-a4722b71b609-kube-api-access-nw72w\") pod \"ovn-controller-5tgrr\" (UID: \"a1f2d08e-94f8-47ec-9e7e-a4722b71b609\") " pod="openstack/ovn-controller-5tgrr" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.378421 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/90984aaa-e287-4038-bc14-16debb186a8d-var-lib\") pod \"ovn-controller-ovs-l8bgh\" (UID: \"90984aaa-e287-4038-bc14-16debb186a8d\") " pod="openstack/ovn-controller-ovs-l8bgh" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.378469 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a1f2d08e-94f8-47ec-9e7e-a4722b71b609-var-run\") pod \"ovn-controller-5tgrr\" (UID: \"a1f2d08e-94f8-47ec-9e7e-a4722b71b609\") " pod="openstack/ovn-controller-5tgrr" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.378559 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a1f2d08e-94f8-47ec-9e7e-a4722b71b609-var-run\") pod \"ovn-controller-5tgrr\" (UID: \"a1f2d08e-94f8-47ec-9e7e-a4722b71b609\") " pod="openstack/ovn-controller-5tgrr" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.379353 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/90984aaa-e287-4038-bc14-16debb186a8d-var-log\") pod \"ovn-controller-ovs-l8bgh\" (UID: \"90984aaa-e287-4038-bc14-16debb186a8d\") " pod="openstack/ovn-controller-ovs-l8bgh" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.379473 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/90984aaa-e287-4038-bc14-16debb186a8d-etc-ovs\") pod \"ovn-controller-ovs-l8bgh\" (UID: \"90984aaa-e287-4038-bc14-16debb186a8d\") " pod="openstack/ovn-controller-ovs-l8bgh" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.379542 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a1f2d08e-94f8-47ec-9e7e-a4722b71b609-var-log-ovn\") pod \"ovn-controller-5tgrr\" (UID: \"a1f2d08e-94f8-47ec-9e7e-a4722b71b609\") " pod="openstack/ovn-controller-5tgrr" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.381084 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90984aaa-e287-4038-bc14-16debb186a8d-scripts\") pod \"ovn-controller-ovs-l8bgh\" (UID: \"90984aaa-e287-4038-bc14-16debb186a8d\") " pod="openstack/ovn-controller-ovs-l8bgh" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.381210 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/90984aaa-e287-4038-bc14-16debb186a8d-var-lib\") pod \"ovn-controller-ovs-l8bgh\" (UID: \"90984aaa-e287-4038-bc14-16debb186a8d\") " pod="openstack/ovn-controller-ovs-l8bgh" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.382040 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1f2d08e-94f8-47ec-9e7e-a4722b71b609-scripts\") pod \"ovn-controller-5tgrr\" (UID: \"a1f2d08e-94f8-47ec-9e7e-a4722b71b609\") " pod="openstack/ovn-controller-5tgrr" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.391636 4933 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f2d08e-94f8-47ec-9e7e-a4722b71b609-ovn-controller-tls-certs\") pod \"ovn-controller-5tgrr\" (UID: \"a1f2d08e-94f8-47ec-9e7e-a4722b71b609\") " pod="openstack/ovn-controller-5tgrr" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.391800 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1f2d08e-94f8-47ec-9e7e-a4722b71b609-combined-ca-bundle\") pod \"ovn-controller-5tgrr\" (UID: \"a1f2d08e-94f8-47ec-9e7e-a4722b71b609\") " pod="openstack/ovn-controller-5tgrr" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.405895 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzv5m\" (UniqueName: \"kubernetes.io/projected/90984aaa-e287-4038-bc14-16debb186a8d-kube-api-access-vzv5m\") pod \"ovn-controller-ovs-l8bgh\" (UID: \"90984aaa-e287-4038-bc14-16debb186a8d\") " pod="openstack/ovn-controller-ovs-l8bgh" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.409765 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw72w\" (UniqueName: \"kubernetes.io/projected/a1f2d08e-94f8-47ec-9e7e-a4722b71b609-kube-api-access-nw72w\") pod \"ovn-controller-5tgrr\" (UID: \"a1f2d08e-94f8-47ec-9e7e-a4722b71b609\") " pod="openstack/ovn-controller-5tgrr" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.514974 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5tgrr" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.529344 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-l8bgh" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.776148 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.778088 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.782849 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.783141 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-pkcpj" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.782864 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.783449 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.787873 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.791748 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.897362 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa3b7950-1309-47f9-9372-7932d0ef0ced-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"fa3b7950-1309-47f9-9372-7932d0ef0ced\") " pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.897428 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa3b7950-1309-47f9-9372-7932d0ef0ced-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"fa3b7950-1309-47f9-9372-7932d0ef0ced\") " pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.897525 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa3b7950-1309-47f9-9372-7932d0ef0ced-config\") pod \"ovsdbserver-nb-0\" (UID: \"fa3b7950-1309-47f9-9372-7932d0ef0ced\") " pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.897575 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa3b7950-1309-47f9-9372-7932d0ef0ced-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"fa3b7950-1309-47f9-9372-7932d0ef0ced\") " pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.897594 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa3b7950-1309-47f9-9372-7932d0ef0ced-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"fa3b7950-1309-47f9-9372-7932d0ef0ced\") " pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.897618 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs2m8\" (UniqueName: \"kubernetes.io/projected/fa3b7950-1309-47f9-9372-7932d0ef0ced-kube-api-access-bs2m8\") pod \"ovsdbserver-nb-0\" (UID: \"fa3b7950-1309-47f9-9372-7932d0ef0ced\") " pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.897637 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fa3b7950-1309-47f9-9372-7932d0ef0ced-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"fa3b7950-1309-47f9-9372-7932d0ef0ced\") " pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:34 crc kubenswrapper[4933]: I1201 09:49:34.897671 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"fa3b7950-1309-47f9-9372-7932d0ef0ced\") " pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:35 crc kubenswrapper[4933]: I1201 09:49:35.000997 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa3b7950-1309-47f9-9372-7932d0ef0ced-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"fa3b7950-1309-47f9-9372-7932d0ef0ced\") " pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:35 crc kubenswrapper[4933]: I1201 09:49:35.001087 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa3b7950-1309-47f9-9372-7932d0ef0ced-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"fa3b7950-1309-47f9-9372-7932d0ef0ced\") " pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:35 crc kubenswrapper[4933]: I1201 09:49:35.001134 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs2m8\" (UniqueName: \"kubernetes.io/projected/fa3b7950-1309-47f9-9372-7932d0ef0ced-kube-api-access-bs2m8\") pod \"ovsdbserver-nb-0\" (UID: \"fa3b7950-1309-47f9-9372-7932d0ef0ced\") " pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:35 crc kubenswrapper[4933]: I1201 09:49:35.001161 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fa3b7950-1309-47f9-9372-7932d0ef0ced-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"fa3b7950-1309-47f9-9372-7932d0ef0ced\") " pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:35 crc kubenswrapper[4933]: I1201 09:49:35.001215 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"fa3b7950-1309-47f9-9372-7932d0ef0ced\") " pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:35 crc kubenswrapper[4933]: I1201 09:49:35.001336 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa3b7950-1309-47f9-9372-7932d0ef0ced-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"fa3b7950-1309-47f9-9372-7932d0ef0ced\") " pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:35 crc kubenswrapper[4933]: I1201 09:49:35.001403 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa3b7950-1309-47f9-9372-7932d0ef0ced-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"fa3b7950-1309-47f9-9372-7932d0ef0ced\") " pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:35 crc kubenswrapper[4933]: I1201 09:49:35.001522 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa3b7950-1309-47f9-9372-7932d0ef0ced-config\") pod \"ovsdbserver-nb-0\" (UID: \"fa3b7950-1309-47f9-9372-7932d0ef0ced\") " pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:35 crc kubenswrapper[4933]: I1201 
09:49:35.001805 4933 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"fa3b7950-1309-47f9-9372-7932d0ef0ced\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:35 crc kubenswrapper[4933]: I1201 09:49:35.002194 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa3b7950-1309-47f9-9372-7932d0ef0ced-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"fa3b7950-1309-47f9-9372-7932d0ef0ced\") " pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:35 crc kubenswrapper[4933]: I1201 09:49:35.002813 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa3b7950-1309-47f9-9372-7932d0ef0ced-config\") pod \"ovsdbserver-nb-0\" (UID: \"fa3b7950-1309-47f9-9372-7932d0ef0ced\") " pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:35 crc kubenswrapper[4933]: I1201 09:49:35.003031 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fa3b7950-1309-47f9-9372-7932d0ef0ced-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"fa3b7950-1309-47f9-9372-7932d0ef0ced\") " pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:35 crc kubenswrapper[4933]: I1201 09:49:35.007447 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa3b7950-1309-47f9-9372-7932d0ef0ced-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"fa3b7950-1309-47f9-9372-7932d0ef0ced\") " pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:35 crc kubenswrapper[4933]: I1201 09:49:35.012574 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa3b7950-1309-47f9-9372-7932d0ef0ced-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"fa3b7950-1309-47f9-9372-7932d0ef0ced\") " pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:35 crc kubenswrapper[4933]: I1201 09:49:35.016104 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa3b7950-1309-47f9-9372-7932d0ef0ced-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"fa3b7950-1309-47f9-9372-7932d0ef0ced\") " pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:35 crc kubenswrapper[4933]: I1201 09:49:35.022015 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs2m8\" (UniqueName: \"kubernetes.io/projected/fa3b7950-1309-47f9-9372-7932d0ef0ced-kube-api-access-bs2m8\") pod \"ovsdbserver-nb-0\" (UID: \"fa3b7950-1309-47f9-9372-7932d0ef0ced\") " pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:35 crc kubenswrapper[4933]: I1201 09:49:35.032645 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"fa3b7950-1309-47f9-9372-7932d0ef0ced\") " pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:35 crc kubenswrapper[4933]: I1201 09:49:35.108076 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.286846 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.289264 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.292659 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.293046 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-wmp7b" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.293219 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.293553 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.302099 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.474888 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/75ef03e2-9526-4184-a3cf-2a5bb26fec93-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"75ef03e2-9526-4184-a3cf-2a5bb26fec93\") " pod="openstack/ovsdbserver-sb-0" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.474991 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/75ef03e2-9526-4184-a3cf-2a5bb26fec93-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"75ef03e2-9526-4184-a3cf-2a5bb26fec93\") " pod="openstack/ovsdbserver-sb-0" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.475389 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75ef03e2-9526-4184-a3cf-2a5bb26fec93-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"75ef03e2-9526-4184-a3cf-2a5bb26fec93\") " pod="openstack/ovsdbserver-sb-0" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.475538 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75ef03e2-9526-4184-a3cf-2a5bb26fec93-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"75ef03e2-9526-4184-a3cf-2a5bb26fec93\") " pod="openstack/ovsdbserver-sb-0" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.475632 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2ttb\" (UniqueName: \"kubernetes.io/projected/75ef03e2-9526-4184-a3cf-2a5bb26fec93-kube-api-access-s2ttb\") pod \"ovsdbserver-sb-0\" (UID: \"75ef03e2-9526-4184-a3cf-2a5bb26fec93\") " pod="openstack/ovsdbserver-sb-0" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.475778 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ef03e2-9526-4184-a3cf-2a5bb26fec93-config\") pod \"ovsdbserver-sb-0\" (UID: \"75ef03e2-9526-4184-a3cf-2a5bb26fec93\") " pod="openstack/ovsdbserver-sb-0" Dec 01 
09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.476041 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75ef03e2-9526-4184-a3cf-2a5bb26fec93-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"75ef03e2-9526-4184-a3cf-2a5bb26fec93\") " pod="openstack/ovsdbserver-sb-0" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.476081 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"75ef03e2-9526-4184-a3cf-2a5bb26fec93\") " pod="openstack/ovsdbserver-sb-0" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.578338 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75ef03e2-9526-4184-a3cf-2a5bb26fec93-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"75ef03e2-9526-4184-a3cf-2a5bb26fec93\") " pod="openstack/ovsdbserver-sb-0" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.578413 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2ttb\" (UniqueName: \"kubernetes.io/projected/75ef03e2-9526-4184-a3cf-2a5bb26fec93-kube-api-access-s2ttb\") pod \"ovsdbserver-sb-0\" (UID: \"75ef03e2-9526-4184-a3cf-2a5bb26fec93\") " pod="openstack/ovsdbserver-sb-0" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.578454 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ef03e2-9526-4184-a3cf-2a5bb26fec93-config\") pod \"ovsdbserver-sb-0\" (UID: \"75ef03e2-9526-4184-a3cf-2a5bb26fec93\") " pod="openstack/ovsdbserver-sb-0" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.578526 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75ef03e2-9526-4184-a3cf-2a5bb26fec93-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"75ef03e2-9526-4184-a3cf-2a5bb26fec93\") " pod="openstack/ovsdbserver-sb-0" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.578549 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"75ef03e2-9526-4184-a3cf-2a5bb26fec93\") " pod="openstack/ovsdbserver-sb-0" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.578598 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/75ef03e2-9526-4184-a3cf-2a5bb26fec93-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"75ef03e2-9526-4184-a3cf-2a5bb26fec93\") " pod="openstack/ovsdbserver-sb-0" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.578628 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/75ef03e2-9526-4184-a3cf-2a5bb26fec93-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"75ef03e2-9526-4184-a3cf-2a5bb26fec93\") " pod="openstack/ovsdbserver-sb-0" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.579008 4933 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod 
\"ovsdbserver-sb-0\" (UID: \"75ef03e2-9526-4184-a3cf-2a5bb26fec93\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-sb-0" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.579179 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75ef03e2-9526-4184-a3cf-2a5bb26fec93-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"75ef03e2-9526-4184-a3cf-2a5bb26fec93\") " pod="openstack/ovsdbserver-sb-0" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.579893 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/75ef03e2-9526-4184-a3cf-2a5bb26fec93-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"75ef03e2-9526-4184-a3cf-2a5bb26fec93\") " pod="openstack/ovsdbserver-sb-0" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.580164 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ef03e2-9526-4184-a3cf-2a5bb26fec93-config\") pod \"ovsdbserver-sb-0\" (UID: \"75ef03e2-9526-4184-a3cf-2a5bb26fec93\") " pod="openstack/ovsdbserver-sb-0" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.580631 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75ef03e2-9526-4184-a3cf-2a5bb26fec93-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"75ef03e2-9526-4184-a3cf-2a5bb26fec93\") " pod="openstack/ovsdbserver-sb-0" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.587517 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75ef03e2-9526-4184-a3cf-2a5bb26fec93-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"75ef03e2-9526-4184-a3cf-2a5bb26fec93\") " pod="openstack/ovsdbserver-sb-0" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.587558 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/75ef03e2-9526-4184-a3cf-2a5bb26fec93-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"75ef03e2-9526-4184-a3cf-2a5bb26fec93\") " pod="openstack/ovsdbserver-sb-0" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.590019 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75ef03e2-9526-4184-a3cf-2a5bb26fec93-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"75ef03e2-9526-4184-a3cf-2a5bb26fec93\") " pod="openstack/ovsdbserver-sb-0" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.601395 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2ttb\" (UniqueName: \"kubernetes.io/projected/75ef03e2-9526-4184-a3cf-2a5bb26fec93-kube-api-access-s2ttb\") pod \"ovsdbserver-sb-0\" (UID: \"75ef03e2-9526-4184-a3cf-2a5bb26fec93\") " pod="openstack/ovsdbserver-sb-0" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.605584 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"75ef03e2-9526-4184-a3cf-2a5bb26fec93\") " pod="openstack/ovsdbserver-sb-0" Dec 01 09:49:38 crc kubenswrapper[4933]: I1201 09:49:38.615120 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 01 09:49:47 crc kubenswrapper[4933]: E1201 09:49:47.710699 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 01 09:49:47 crc kubenswrapper[4933]: E1201 09:49:47.712039 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tmcdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(3d9a36ba-b2c3-4f85-96d6-608d8e9749ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:49:47 crc kubenswrapper[4933]: E1201 09:49:47.713281 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" 
podUID="3d9a36ba-b2c3-4f85-96d6-608d8e9749ec" Dec 01 09:49:47 crc kubenswrapper[4933]: E1201 09:49:47.714130 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 01 09:49:47 crc kubenswrapper[4933]: E1201 09:49:47.714369 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-79bgq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(b8f90456-f375-447c-8f32-8ca629a28861): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:49:47 crc kubenswrapper[4933]: E1201 09:49:47.715601 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" 
podUID="b8f90456-f375-447c-8f32-8ca629a28861" Dec 01 09:49:48 crc kubenswrapper[4933]: E1201 09:49:48.570440 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="3d9a36ba-b2c3-4f85-96d6-608d8e9749ec" Dec 01 09:49:48 crc kubenswrapper[4933]: E1201 09:49:48.571108 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="b8f90456-f375-447c-8f32-8ca629a28861" Dec 01 09:49:52 crc kubenswrapper[4933]: I1201 09:49:52.551635 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 01 09:49:55 crc kubenswrapper[4933]: E1201 09:49:55.294795 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 01 09:49:55 crc kubenswrapper[4933]: E1201 09:49:55.295506 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sxxmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-xcc8g_openstack(f7a0d9ea-050c-4472-abc2-709c1052a058): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:49:55 crc kubenswrapper[4933]: 
E1201 09:49:55.295128 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 01 09:49:55 crc kubenswrapper[4933]: E1201 09:49:55.295948 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqbdx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-qz445_openstack(01a1a54d-3a8e-4dfd-aed1-d904670bce61): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:49:55 crc kubenswrapper[4933]: E1201 09:49:55.297083 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-qz445" podUID="01a1a54d-3a8e-4dfd-aed1-d904670bce61" Dec 01 09:49:55 crc kubenswrapper[4933]: E1201 09:49:55.297102 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-xcc8g" podUID="f7a0d9ea-050c-4472-abc2-709c1052a058" Dec 01 09:49:55 crc kubenswrapper[4933]: E1201 09:49:55.321339 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 01 09:49:55 crc kubenswrapper[4933]: E1201 09:49:55.321519 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89ghp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-2z4vq_openstack(cb7463c3-8d88-4334-8627-a4f62371faf8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:49:55 crc kubenswrapper[4933]: E1201 09:49:55.322603 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-2z4vq" podUID="cb7463c3-8d88-4334-8627-a4f62371faf8" Dec 01 09:49:55 crc kubenswrapper[4933]: E1201 09:49:55.333431 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 01 09:49:55 crc kubenswrapper[4933]: E1201 09:49:55.333610 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 
5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fkxg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-crjgc_openstack(98d9a2ca-cef0-4de2-9067-93754ed34ba3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:49:55 crc kubenswrapper[4933]: E1201 09:49:55.334761 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-crjgc" podUID="98d9a2ca-cef0-4de2-9067-93754ed34ba3" Dec 01 09:49:55 crc kubenswrapper[4933]: I1201 09:49:55.513319 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 01 09:49:55 crc kubenswrapper[4933]: I1201 09:49:55.627341 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1ba27bfa-74d8-4df5-8217-666a02132516","Type":"ContainerStarted","Data":"ae5e84ee0b6ac92b91621eec820dd838c9aee09f7068bda81b2c66c9215357d2"} Dec 01 09:49:55 crc kubenswrapper[4933]: I1201 09:49:55.631042 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"fa3b7950-1309-47f9-9372-7932d0ef0ced","Type":"ContainerStarted","Data":"87f805f5ceaeac0a11b11ee83ebd6ac11aba7586358a076a02ca2dedd9f9d774"} Dec 01 09:49:55 crc kubenswrapper[4933]: I1201 09:49:55.634090 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"12be8eec-c6b1-4606-83de-e19ac2ab17eb","Type":"ContainerStarted","Data":"aaa045311f08138fae26d74a13f0774723f3bb4cf5f2a9380b5e30e53569f5cb"} Dec 01 09:49:55 crc kubenswrapper[4933]: E1201 09:49:55.636694 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-5ccc8479f9-qz445" podUID="01a1a54d-3a8e-4dfd-aed1-d904670bce61" Dec 01 09:49:55 crc kubenswrapper[4933]: E1201 09:49:55.636691 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-2z4vq" podUID="cb7463c3-8d88-4334-8627-a4f62371faf8" Dec 01 09:49:55 crc kubenswrapper[4933]: I1201 09:49:55.994658 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 01 09:49:56 crc kubenswrapper[4933]: W1201 09:49:56.003366 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75ef03e2_9526_4184_a3cf_2a5bb26fec93.slice/crio-d2b9cc08ace3b65bed702e6364bab9028d5028e939a9fa925f2c7e2b107e11de WatchSource:0}: Error finding container d2b9cc08ace3b65bed702e6364bab9028d5028e939a9fa925f2c7e2b107e11de: Status 404 returned error can't find the container with id d2b9cc08ace3b65bed702e6364bab9028d5028e939a9fa925f2c7e2b107e11de Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.024162 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xcc8g" Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.113114 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxxmv\" (UniqueName: \"kubernetes.io/projected/f7a0d9ea-050c-4472-abc2-709c1052a058-kube-api-access-sxxmv\") pod \"f7a0d9ea-050c-4472-abc2-709c1052a058\" (UID: \"f7a0d9ea-050c-4472-abc2-709c1052a058\") " Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.113352 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7a0d9ea-050c-4472-abc2-709c1052a058-config\") pod \"f7a0d9ea-050c-4472-abc2-709c1052a058\" (UID: \"f7a0d9ea-050c-4472-abc2-709c1052a058\") " Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.113857 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7a0d9ea-050c-4472-abc2-709c1052a058-config" (OuterVolumeSpecName: "config") pod "f7a0d9ea-050c-4472-abc2-709c1052a058" (UID: "f7a0d9ea-050c-4472-abc2-709c1052a058"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.123630 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7a0d9ea-050c-4472-abc2-709c1052a058-kube-api-access-sxxmv" (OuterVolumeSpecName: "kube-api-access-sxxmv") pod "f7a0d9ea-050c-4472-abc2-709c1052a058" (UID: "f7a0d9ea-050c-4472-abc2-709c1052a058"). InnerVolumeSpecName "kube-api-access-sxxmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.172071 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.179724 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-crjgc" Dec 01 09:49:56 crc kubenswrapper[4933]: W1201 09:49:56.181868 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55a605ff_7d52_4d80_bd32_6301d0c696c1.slice/crio-54c420a7e1c1b598d96b8a6a6347862a47195417c1d6a129224512949f2dd33c WatchSource:0}: Error finding container 54c420a7e1c1b598d96b8a6a6347862a47195417c1d6a129224512949f2dd33c: Status 404 returned error can't find the container with id 54c420a7e1c1b598d96b8a6a6347862a47195417c1d6a129224512949f2dd33c Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.221268 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7a0d9ea-050c-4472-abc2-709c1052a058-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.221645 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxxmv\" (UniqueName: \"kubernetes.io/projected/f7a0d9ea-050c-4472-abc2-709c1052a058-kube-api-access-sxxmv\") on node \"crc\" DevicePath \"\"" Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.231625 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5tgrr"] Dec 01 09:49:56 crc kubenswrapper[4933]: W1201 09:49:56.240843 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1f2d08e_94f8_47ec_9e7e_a4722b71b609.slice/crio-bb89ae50b3d6ade92b721dc22cfa36e21374f444eadac9af22c7128b68a97bd8 WatchSource:0}: Error finding container bb89ae50b3d6ade92b721dc22cfa36e21374f444eadac9af22c7128b68a97bd8: Status 404 returned error can't find the container with id bb89ae50b3d6ade92b721dc22cfa36e21374f444eadac9af22c7128b68a97bd8 Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.245498 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 09:49:56 crc kubenswrapper[4933]: W1201 09:49:56.249899 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7de84cc_bb1a_45ba_bbba_acc140d0facc.slice/crio-d5a696635a1f8cc07589a917e60ad35dca0dedbacefd5a370eed41947569a067 WatchSource:0}: Error finding container d5a696635a1f8cc07589a917e60ad35dca0dedbacefd5a370eed41947569a067: Status 404 returned error can't find the container with id d5a696635a1f8cc07589a917e60ad35dca0dedbacefd5a370eed41947569a067 Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.322923 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98d9a2ca-cef0-4de2-9067-93754ed34ba3-dns-svc\") pod \"98d9a2ca-cef0-4de2-9067-93754ed34ba3\" (UID: \"98d9a2ca-cef0-4de2-9067-93754ed34ba3\") " Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.323060 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkxg4\" (UniqueName: \"kubernetes.io/projected/98d9a2ca-cef0-4de2-9067-93754ed34ba3-kube-api-access-fkxg4\") pod \"98d9a2ca-cef0-4de2-9067-93754ed34ba3\" (UID: \"98d9a2ca-cef0-4de2-9067-93754ed34ba3\") " Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.323098 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98d9a2ca-cef0-4de2-9067-93754ed34ba3-config\") pod \"98d9a2ca-cef0-4de2-9067-93754ed34ba3\" (UID: 
\"98d9a2ca-cef0-4de2-9067-93754ed34ba3\") " Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.324517 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98d9a2ca-cef0-4de2-9067-93754ed34ba3-config" (OuterVolumeSpecName: "config") pod "98d9a2ca-cef0-4de2-9067-93754ed34ba3" (UID: "98d9a2ca-cef0-4de2-9067-93754ed34ba3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.324938 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98d9a2ca-cef0-4de2-9067-93754ed34ba3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "98d9a2ca-cef0-4de2-9067-93754ed34ba3" (UID: "98d9a2ca-cef0-4de2-9067-93754ed34ba3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.333588 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98d9a2ca-cef0-4de2-9067-93754ed34ba3-kube-api-access-fkxg4" (OuterVolumeSpecName: "kube-api-access-fkxg4") pod "98d9a2ca-cef0-4de2-9067-93754ed34ba3" (UID: "98d9a2ca-cef0-4de2-9067-93754ed34ba3"). InnerVolumeSpecName "kube-api-access-fkxg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.425281 4933 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98d9a2ca-cef0-4de2-9067-93754ed34ba3-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.425337 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkxg4\" (UniqueName: \"kubernetes.io/projected/98d9a2ca-cef0-4de2-9067-93754ed34ba3-kube-api-access-fkxg4\") on node \"crc\" DevicePath \"\"" Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.425354 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98d9a2ca-cef0-4de2-9067-93754ed34ba3-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.642289 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-xcc8g" event={"ID":"f7a0d9ea-050c-4472-abc2-709c1052a058","Type":"ContainerDied","Data":"ecd5adb70b8660ed1fb20cdd126e183c0072e8d30fa65845944e5cd5578b8657"} Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.642428 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xcc8g" Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.644799 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c7de84cc-bb1a-45ba-bbba-acc140d0facc","Type":"ContainerStarted","Data":"d5a696635a1f8cc07589a917e60ad35dca0dedbacefd5a370eed41947569a067"} Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.646775 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"55a605ff-7d52-4d80-bd32-6301d0c696c1","Type":"ContainerStarted","Data":"72cd727ce70704868792329cdf5d9b9d17eca07c7eb44828c7892c75549830a0"} Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.646816 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"55a605ff-7d52-4d80-bd32-6301d0c696c1","Type":"ContainerStarted","Data":"54c420a7e1c1b598d96b8a6a6347862a47195417c1d6a129224512949f2dd33c"} Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.647704 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-crjgc" Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.647695 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-crjgc" event={"ID":"98d9a2ca-cef0-4de2-9067-93754ed34ba3","Type":"ContainerDied","Data":"08bd0724879652a1a021a06316920facf9acf670cbc38752deb4b7352e448f16"} Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.648807 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"75ef03e2-9526-4184-a3cf-2a5bb26fec93","Type":"ContainerStarted","Data":"d2b9cc08ace3b65bed702e6364bab9028d5028e939a9fa925f2c7e2b107e11de"} Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.649820 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5tgrr" event={"ID":"a1f2d08e-94f8-47ec-9e7e-a4722b71b609","Type":"ContainerStarted","Data":"bb89ae50b3d6ade92b721dc22cfa36e21374f444eadac9af22c7128b68a97bd8"} Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.701641 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xcc8g"] Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.709645 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xcc8g"] Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.737600 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-crjgc"] Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.744117 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-crjgc"] Dec 01 09:49:56 crc kubenswrapper[4933]: I1201 09:49:56.873497 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-l8bgh"] Dec 01 09:49:56 crc kubenswrapper[4933]: W1201 09:49:56.878688 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90984aaa_e287_4038_bc14_16debb186a8d.slice/crio-094ac113a839d401804bbba3cfd84c05a3d5e15c83c605669df48770b764438e WatchSource:0}: Error finding container 094ac113a839d401804bbba3cfd84c05a3d5e15c83c605669df48770b764438e: Status 404 returned error can't find the container with id 094ac113a839d401804bbba3cfd84c05a3d5e15c83c605669df48770b764438e Dec 01 09:49:57 crc kubenswrapper[4933]: I1201 09:49:57.676621 4933 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98d9a2ca-cef0-4de2-9067-93754ed34ba3" path="/var/lib/kubelet/pods/98d9a2ca-cef0-4de2-9067-93754ed34ba3/volumes" Dec 01 09:49:57 crc kubenswrapper[4933]: I1201 09:49:57.677003 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7a0d9ea-050c-4472-abc2-709c1052a058" path="/var/lib/kubelet/pods/f7a0d9ea-050c-4472-abc2-709c1052a058/volumes" Dec 01 09:49:57 crc kubenswrapper[4933]: I1201 09:49:57.677345 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l8bgh" event={"ID":"90984aaa-e287-4038-bc14-16debb186a8d","Type":"ContainerStarted","Data":"094ac113a839d401804bbba3cfd84c05a3d5e15c83c605669df48770b764438e"} Dec 01 09:50:00 crc kubenswrapper[4933]: I1201 09:50:00.705725 4933 generic.go:334] "Generic (PLEG): container finished" podID="1ba27bfa-74d8-4df5-8217-666a02132516" containerID="ae5e84ee0b6ac92b91621eec820dd838c9aee09f7068bda81b2c66c9215357d2" exitCode=0 Dec 01 09:50:00 crc kubenswrapper[4933]: I1201 09:50:00.705800 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1ba27bfa-74d8-4df5-8217-666a02132516","Type":"ContainerDied","Data":"ae5e84ee0b6ac92b91621eec820dd838c9aee09f7068bda81b2c66c9215357d2"} Dec 01 09:50:01 crc kubenswrapper[4933]: I1201 09:50:01.718336 4933 generic.go:334] "Generic (PLEG): container finished" podID="55a605ff-7d52-4d80-bd32-6301d0c696c1" containerID="72cd727ce70704868792329cdf5d9b9d17eca07c7eb44828c7892c75549830a0" exitCode=0 Dec 01 09:50:01 crc kubenswrapper[4933]: I1201 09:50:01.718408 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"55a605ff-7d52-4d80-bd32-6301d0c696c1","Type":"ContainerDied","Data":"72cd727ce70704868792329cdf5d9b9d17eca07c7eb44828c7892c75549830a0"} Dec 01 09:50:01 crc kubenswrapper[4933]: I1201 09:50:01.721898 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"12be8eec-c6b1-4606-83de-e19ac2ab17eb","Type":"ContainerStarted","Data":"4572f4307cd86d613e2f00ae3631d2ac386eba9a9b07f80e6307020c3fb516f4"} Dec 01 09:50:01 crc kubenswrapper[4933]: I1201 09:50:01.722005 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 01 09:50:01 crc kubenswrapper[4933]: I1201 09:50:01.724225 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l8bgh" event={"ID":"90984aaa-e287-4038-bc14-16debb186a8d","Type":"ContainerStarted","Data":"02c8ae33914d1d4777124541cfc50bb57206e40313ea00005ead20733d500dec"} Dec 01 09:50:01 crc kubenswrapper[4933]: I1201 09:50:01.768091 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=27.915887258 podStartE2EDuration="32.768064681s" podCreationTimestamp="2025-12-01 09:49:29 +0000 UTC" firstStartedPulling="2025-12-01 09:49:55.580446884 +0000 UTC m=+1086.222170499" lastFinishedPulling="2025-12-01 09:50:00.432624317 +0000 UTC m=+1091.074347922" observedRunningTime="2025-12-01 09:50:01.762833664 +0000 UTC m=+1092.404557289" watchObservedRunningTime="2025-12-01 09:50:01.768064681 +0000 UTC m=+1092.409788296" Dec 01 09:50:02 crc kubenswrapper[4933]: I1201 09:50:02.736273 4933 generic.go:334] "Generic (PLEG): container finished" podID="90984aaa-e287-4038-bc14-16debb186a8d" containerID="02c8ae33914d1d4777124541cfc50bb57206e40313ea00005ead20733d500dec" exitCode=0 Dec 01 09:50:02 crc kubenswrapper[4933]: I1201 
09:50:02.736428 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l8bgh" event={"ID":"90984aaa-e287-4038-bc14-16debb186a8d","Type":"ContainerDied","Data":"02c8ae33914d1d4777124541cfc50bb57206e40313ea00005ead20733d500dec"} Dec 01 09:50:02 crc kubenswrapper[4933]: I1201 09:50:02.738916 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c7de84cc-bb1a-45ba-bbba-acc140d0facc","Type":"ContainerStarted","Data":"af94f350651a42ea0900d9ce74a1f4c564b641f5865ec051136098b1db9a214a"} Dec 01 09:50:02 crc kubenswrapper[4933]: I1201 09:50:02.739034 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 01 09:50:02 crc kubenswrapper[4933]: I1201 09:50:02.742684 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1ba27bfa-74d8-4df5-8217-666a02132516","Type":"ContainerStarted","Data":"158412d71b7d7de5c399c232034851249afe07389558e122b58eb04cb013cd42"} Dec 01 09:50:02 crc kubenswrapper[4933]: I1201 09:50:02.746621 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"55a605ff-7d52-4d80-bd32-6301d0c696c1","Type":"ContainerStarted","Data":"0d659c0c44dc3b9bf5df2ec14ed7692e4abc4ea2bebb5e33a5a9bdd020bab4c5"} Dec 01 09:50:02 crc kubenswrapper[4933]: I1201 09:50:02.748776 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"75ef03e2-9526-4184-a3cf-2a5bb26fec93","Type":"ContainerStarted","Data":"9d44a0b58f7aef047a1f7a3d2eae604512f3c06f22e0f0bd017181de46b16521"} Dec 01 09:50:02 crc kubenswrapper[4933]: I1201 09:50:02.750360 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5tgrr" event={"ID":"a1f2d08e-94f8-47ec-9e7e-a4722b71b609","Type":"ContainerStarted","Data":"dd69fe911467fda7fee44f0c8760cb5487b15f8532ee3a0c6f09e3aef9b95fd8"} Dec 01 09:50:02 crc kubenswrapper[4933]: I1201 09:50:02.750538 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-5tgrr" Dec 01 09:50:02 crc kubenswrapper[4933]: I1201 09:50:02.752934 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"fa3b7950-1309-47f9-9372-7932d0ef0ced","Type":"ContainerStarted","Data":"ccb8ba7e96611a2308d70b1b35c25f2f36d64e1e7dc373b6727673f92d7be24c"} Dec 01 09:50:02 crc kubenswrapper[4933]: I1201 09:50:02.784003 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-5tgrr" podStartSLOduration=24.37951743 podStartE2EDuration="28.78398012s" podCreationTimestamp="2025-12-01 09:49:34 +0000 UTC" firstStartedPulling="2025-12-01 09:49:56.243526142 +0000 UTC m=+1086.885249757" lastFinishedPulling="2025-12-01 09:50:00.647988832 +0000 UTC m=+1091.289712447" observedRunningTime="2025-12-01 09:50:02.782609676 +0000 UTC m=+1093.424333301" watchObservedRunningTime="2025-12-01 09:50:02.78398012 +0000 UTC m=+1093.425703735" Dec 01 09:50:02 crc kubenswrapper[4933]: I1201 09:50:02.806499 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=35.806479391 podStartE2EDuration="35.806479391s" podCreationTimestamp="2025-12-01 09:49:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:50:02.80276858 +0000 UTC m=+1093.444492205" 
watchObservedRunningTime="2025-12-01 09:50:02.806479391 +0000 UTC m=+1093.448203006" Dec 01 09:50:02 crc kubenswrapper[4933]: I1201 09:50:02.826877 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=25.944277322 podStartE2EDuration="31.826854239s" podCreationTimestamp="2025-12-01 09:49:31 +0000 UTC" firstStartedPulling="2025-12-01 09:49:56.253206749 +0000 UTC m=+1086.894930364" lastFinishedPulling="2025-12-01 09:50:02.135783666 +0000 UTC m=+1092.777507281" observedRunningTime="2025-12-01 09:50:02.819812737 +0000 UTC m=+1093.461536362" watchObservedRunningTime="2025-12-01 09:50:02.826854239 +0000 UTC m=+1093.468577854" Dec 01 09:50:02 crc kubenswrapper[4933]: I1201 09:50:02.851771 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=10.263762833 podStartE2EDuration="36.851750019s" podCreationTimestamp="2025-12-01 09:49:26 +0000 UTC" firstStartedPulling="2025-12-01 09:49:28.725241324 +0000 UTC m=+1059.366964939" lastFinishedPulling="2025-12-01 09:49:55.31322851 +0000 UTC m=+1085.954952125" observedRunningTime="2025-12-01 09:50:02.844072462 +0000 UTC m=+1093.485796097" watchObservedRunningTime="2025-12-01 09:50:02.851750019 +0000 UTC m=+1093.493473624" Dec 01 09:50:03 crc kubenswrapper[4933]: I1201 09:50:03.767791 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l8bgh" event={"ID":"90984aaa-e287-4038-bc14-16debb186a8d","Type":"ContainerStarted","Data":"ba4d38dc39857acfe75944d82e91e0c0849b4ab38723f8ed707cfd2283e4cca0"} Dec 01 09:50:03 crc kubenswrapper[4933]: I1201 09:50:03.768136 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-l8bgh" Dec 01 09:50:03 crc kubenswrapper[4933]: I1201 09:50:03.768150 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l8bgh" event={"ID":"90984aaa-e287-4038-bc14-16debb186a8d","Type":"ContainerStarted","Data":"63a9c13126ac31806a9b0382e178f60255b86bb87ccd4c8e8149d98ffe1109f4"} Dec 01 09:50:03 crc kubenswrapper[4933]: I1201 09:50:03.768162 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-l8bgh" Dec 01 09:50:03 crc kubenswrapper[4933]: I1201 09:50:03.771775 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec","Type":"ContainerStarted","Data":"50e61c5cd567cfe70fd9d90579b11db9d8c588d75c47667676368152554b647e"} Dec 01 09:50:03 crc kubenswrapper[4933]: I1201 09:50:03.790537 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-l8bgh" podStartSLOduration=26.024669258 podStartE2EDuration="29.790520129s" podCreationTimestamp="2025-12-01 09:49:34 +0000 UTC" firstStartedPulling="2025-12-01 09:49:56.881537606 +0000 UTC m=+1087.523261221" lastFinishedPulling="2025-12-01 09:50:00.647388477 +0000 UTC m=+1091.289112092" observedRunningTime="2025-12-01 09:50:03.788364726 +0000 UTC m=+1094.430088341" watchObservedRunningTime="2025-12-01 09:50:03.790520129 +0000 UTC m=+1094.432243744" Dec 01 09:50:05 crc kubenswrapper[4933]: I1201 09:50:05.789824 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"75ef03e2-9526-4184-a3cf-2a5bb26fec93","Type":"ContainerStarted","Data":"140ff3211f39d9e4f2d09ff76a6c8631344932532409ca09a5332d0443806911"} Dec 01 09:50:05 crc 
kubenswrapper[4933]: I1201 09:50:05.793048 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"fa3b7950-1309-47f9-9372-7932d0ef0ced","Type":"ContainerStarted","Data":"098d05f61378fa087261c6864614e43dcad83a98e46fb3d792a6ae11613579a9"} Dec 01 09:50:05 crc kubenswrapper[4933]: I1201 09:50:05.820669 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=19.49537903 podStartE2EDuration="28.820642964s" podCreationTimestamp="2025-12-01 09:49:37 +0000 UTC" firstStartedPulling="2025-12-01 09:49:56.007206475 +0000 UTC m=+1086.648930090" lastFinishedPulling="2025-12-01 09:50:05.332470409 +0000 UTC m=+1095.974194024" observedRunningTime="2025-12-01 09:50:05.810528916 +0000 UTC m=+1096.452252541" watchObservedRunningTime="2025-12-01 09:50:05.820642964 +0000 UTC m=+1096.462366579" Dec 01 09:50:05 crc kubenswrapper[4933]: I1201 09:50:05.835219 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=22.761760484 podStartE2EDuration="32.83519721s" podCreationTimestamp="2025-12-01 09:49:33 +0000 UTC" firstStartedPulling="2025-12-01 09:49:55.284095057 +0000 UTC m=+1085.925818672" lastFinishedPulling="2025-12-01 09:50:05.357531783 +0000 UTC m=+1095.999255398" observedRunningTime="2025-12-01 09:50:05.830148986 +0000 UTC m=+1096.471872591" watchObservedRunningTime="2025-12-01 09:50:05.83519721 +0000 UTC m=+1096.476920825" Dec 01 09:50:06 crc kubenswrapper[4933]: I1201 09:50:06.802277 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b8f90456-f375-447c-8f32-8ca629a28861","Type":"ContainerStarted","Data":"649eb745891b3ba68ed59fafd553564f944a61857c2db3028ded94f18160e91a"} Dec 01 09:50:07 crc kubenswrapper[4933]: I1201 09:50:07.813334 4933 generic.go:334] "Generic (PLEG): container finished" podID="01a1a54d-3a8e-4dfd-aed1-d904670bce61" containerID="9fcbc98d727a0c65b51905eb06ecb8fda5c5d600f3a38f4c83ccc6d8f93935c1" exitCode=0 Dec 01 09:50:07 crc kubenswrapper[4933]: I1201 09:50:07.813660 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-qz445" event={"ID":"01a1a54d-3a8e-4dfd-aed1-d904670bce61","Type":"ContainerDied","Data":"9fcbc98d727a0c65b51905eb06ecb8fda5c5d600f3a38f4c83ccc6d8f93935c1"} Dec 01 09:50:08 crc kubenswrapper[4933]: I1201 09:50:08.088476 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 01 09:50:08 crc kubenswrapper[4933]: I1201 09:50:08.088540 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 01 09:50:08 crc kubenswrapper[4933]: I1201 09:50:08.108568 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 01 09:50:08 crc kubenswrapper[4933]: I1201 09:50:08.153898 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 01 09:50:08 crc kubenswrapper[4933]: I1201 09:50:08.204260 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 01 09:50:08 crc kubenswrapper[4933]: I1201 09:50:08.615491 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 01 09:50:08 crc kubenswrapper[4933]: I1201 09:50:08.615562 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovsdbserver-sb-0" Dec 01 09:50:08 crc kubenswrapper[4933]: I1201 09:50:08.657033 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 01 09:50:08 crc kubenswrapper[4933]: I1201 09:50:08.827368 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-qz445" event={"ID":"01a1a54d-3a8e-4dfd-aed1-d904670bce61","Type":"ContainerStarted","Data":"1b96d933c47c5ae5c8c313dd7d9e4e4fdf5b45ccd7390a73f65a91152a6d3ae6"} Dec 01 09:50:08 crc kubenswrapper[4933]: I1201 09:50:08.827914 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 01 09:50:08 crc kubenswrapper[4933]: I1201 09:50:08.828810 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc8479f9-qz445" Dec 01 09:50:08 crc kubenswrapper[4933]: I1201 09:50:08.869918 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc8479f9-qz445" podStartSLOduration=3.462707254 podStartE2EDuration="44.869892496s" podCreationTimestamp="2025-12-01 09:49:24 +0000 UTC" firstStartedPulling="2025-12-01 09:49:25.825196294 +0000 UTC m=+1056.466919909" lastFinishedPulling="2025-12-01 09:50:07.232381536 +0000 UTC m=+1097.874105151" observedRunningTime="2025-12-01 09:50:08.867827296 +0000 UTC m=+1099.509550911" watchObservedRunningTime="2025-12-01 09:50:08.869892496 +0000 UTC m=+1099.511616111" Dec 01 09:50:08 crc kubenswrapper[4933]: I1201 09:50:08.873488 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 01 09:50:08 crc kubenswrapper[4933]: I1201 09:50:08.882999 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 01 09:50:08 crc kubenswrapper[4933]: I1201 09:50:08.922648 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.268872 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-qz445"] Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.313397 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-nrmcm"] Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.315702 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-nrmcm" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.318720 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.334540 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-nrmcm"] Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.389118 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-qxwjl"] Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.390920 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-qxwjl" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.397551 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.431220 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-qxwjl"] Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.488403 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6cf5-account-create-update-shwg2"] Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.489201 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1db280e5-ecd7-44cf-933a-2d55ba6f7b42-ovn-rundir\") pod \"ovn-controller-metrics-qxwjl\" (UID: \"1db280e5-ecd7-44cf-933a-2d55ba6f7b42\") " pod="openstack/ovn-controller-metrics-qxwjl" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.489282 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a128d87d-63cd-4f75-8f47-5bb700c496f3-config\") pod \"dnsmasq-dns-7fd796d7df-nrmcm\" (UID: \"a128d87d-63cd-4f75-8f47-5bb700c496f3\") " pod="openstack/dnsmasq-dns-7fd796d7df-nrmcm" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.495669 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6cf5-account-create-update-shwg2" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.503811 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.513802 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a128d87d-63cd-4f75-8f47-5bb700c496f3-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-nrmcm\" (UID: \"a128d87d-63cd-4f75-8f47-5bb700c496f3\") " pod="openstack/dnsmasq-dns-7fd796d7df-nrmcm" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.513960 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1db280e5-ecd7-44cf-933a-2d55ba6f7b42-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qxwjl\" (UID: \"1db280e5-ecd7-44cf-933a-2d55ba6f7b42\") " pod="openstack/ovn-controller-metrics-qxwjl" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.514011 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db280e5-ecd7-44cf-933a-2d55ba6f7b42-config\") pod \"ovn-controller-metrics-qxwjl\" (UID: \"1db280e5-ecd7-44cf-933a-2d55ba6f7b42\") " pod="openstack/ovn-controller-metrics-qxwjl" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.514044 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a128d87d-63cd-4f75-8f47-5bb700c496f3-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-nrmcm\" (UID: \"a128d87d-63cd-4f75-8f47-5bb700c496f3\") " pod="openstack/dnsmasq-dns-7fd796d7df-nrmcm" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.514121 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1db280e5-ecd7-44cf-933a-2d55ba6f7b42-combined-ca-bundle\") pod \"ovn-controller-metrics-qxwjl\" (UID: \"1db280e5-ecd7-44cf-933a-2d55ba6f7b42\") " pod="openstack/ovn-controller-metrics-qxwjl" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.514359 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5md4\" (UniqueName: \"kubernetes.io/projected/a128d87d-63cd-4f75-8f47-5bb700c496f3-kube-api-access-k5md4\") pod \"dnsmasq-dns-7fd796d7df-nrmcm\" (UID: \"a128d87d-63cd-4f75-8f47-5bb700c496f3\") " pod="openstack/dnsmasq-dns-7fd796d7df-nrmcm" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.514428 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1db280e5-ecd7-44cf-933a-2d55ba6f7b42-ovs-rundir\") pod \"ovn-controller-metrics-qxwjl\" (UID: \"1db280e5-ecd7-44cf-933a-2d55ba6f7b42\") " pod="openstack/ovn-controller-metrics-qxwjl" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.514559 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmv4q\" (UniqueName: \"kubernetes.io/projected/1db280e5-ecd7-44cf-933a-2d55ba6f7b42-kube-api-access-cmv4q\") pod \"ovn-controller-metrics-qxwjl\" (UID: \"1db280e5-ecd7-44cf-933a-2d55ba6f7b42\") " pod="openstack/ovn-controller-metrics-qxwjl" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.567822 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.567875 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.572244 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6cf5-account-create-update-shwg2"] Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.596513 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2z4vq"] Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.620136 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db280e5-ecd7-44cf-933a-2d55ba6f7b42-config\") pod \"ovn-controller-metrics-qxwjl\" (UID: \"1db280e5-ecd7-44cf-933a-2d55ba6f7b42\") " pod="openstack/ovn-controller-metrics-qxwjl" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.620208 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a128d87d-63cd-4f75-8f47-5bb700c496f3-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-nrmcm\" (UID: \"a128d87d-63cd-4f75-8f47-5bb700c496f3\") " pod="openstack/dnsmasq-dns-7fd796d7df-nrmcm" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.620253 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1db280e5-ecd7-44cf-933a-2d55ba6f7b42-combined-ca-bundle\") pod \"ovn-controller-metrics-qxwjl\" (UID: \"1db280e5-ecd7-44cf-933a-2d55ba6f7b42\") " pod="openstack/ovn-controller-metrics-qxwjl" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.620354 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrnnm\" (UniqueName: 
\"kubernetes.io/projected/49de438f-c2ca-4d52-a9ca-47fb8ef7ec81-kube-api-access-xrnnm\") pod \"keystone-6cf5-account-create-update-shwg2\" (UID: \"49de438f-c2ca-4d52-a9ca-47fb8ef7ec81\") " pod="openstack/keystone-6cf5-account-create-update-shwg2" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.620396 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5md4\" (UniqueName: \"kubernetes.io/projected/a128d87d-63cd-4f75-8f47-5bb700c496f3-kube-api-access-k5md4\") pod \"dnsmasq-dns-7fd796d7df-nrmcm\" (UID: \"a128d87d-63cd-4f75-8f47-5bb700c496f3\") " pod="openstack/dnsmasq-dns-7fd796d7df-nrmcm" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.620449 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1db280e5-ecd7-44cf-933a-2d55ba6f7b42-ovs-rundir\") pod \"ovn-controller-metrics-qxwjl\" (UID: \"1db280e5-ecd7-44cf-933a-2d55ba6f7b42\") " pod="openstack/ovn-controller-metrics-qxwjl" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.620481 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49de438f-c2ca-4d52-a9ca-47fb8ef7ec81-operator-scripts\") pod \"keystone-6cf5-account-create-update-shwg2\" (UID: \"49de438f-c2ca-4d52-a9ca-47fb8ef7ec81\") " pod="openstack/keystone-6cf5-account-create-update-shwg2" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.620527 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmv4q\" (UniqueName: \"kubernetes.io/projected/1db280e5-ecd7-44cf-933a-2d55ba6f7b42-kube-api-access-cmv4q\") pod \"ovn-controller-metrics-qxwjl\" (UID: \"1db280e5-ecd7-44cf-933a-2d55ba6f7b42\") " pod="openstack/ovn-controller-metrics-qxwjl" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.620557 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1db280e5-ecd7-44cf-933a-2d55ba6f7b42-ovn-rundir\") pod \"ovn-controller-metrics-qxwjl\" (UID: \"1db280e5-ecd7-44cf-933a-2d55ba6f7b42\") " pod="openstack/ovn-controller-metrics-qxwjl" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.620584 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a128d87d-63cd-4f75-8f47-5bb700c496f3-config\") pod \"dnsmasq-dns-7fd796d7df-nrmcm\" (UID: \"a128d87d-63cd-4f75-8f47-5bb700c496f3\") " pod="openstack/dnsmasq-dns-7fd796d7df-nrmcm" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.620637 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a128d87d-63cd-4f75-8f47-5bb700c496f3-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-nrmcm\" (UID: \"a128d87d-63cd-4f75-8f47-5bb700c496f3\") " pod="openstack/dnsmasq-dns-7fd796d7df-nrmcm" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.620670 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1db280e5-ecd7-44cf-933a-2d55ba6f7b42-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qxwjl\" (UID: \"1db280e5-ecd7-44cf-933a-2d55ba6f7b42\") " pod="openstack/ovn-controller-metrics-qxwjl" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.622242 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1db280e5-ecd7-44cf-933a-2d55ba6f7b42-ovn-rundir\") pod \"ovn-controller-metrics-qxwjl\" (UID: \"1db280e5-ecd7-44cf-933a-2d55ba6f7b42\") " pod="openstack/ovn-controller-metrics-qxwjl" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.623434 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db280e5-ecd7-44cf-933a-2d55ba6f7b42-config\") pod \"ovn-controller-metrics-qxwjl\" (UID: \"1db280e5-ecd7-44cf-933a-2d55ba6f7b42\") " pod="openstack/ovn-controller-metrics-qxwjl" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.624265 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a128d87d-63cd-4f75-8f47-5bb700c496f3-config\") pod \"dnsmasq-dns-7fd796d7df-nrmcm\" (UID: \"a128d87d-63cd-4f75-8f47-5bb700c496f3\") " pod="openstack/dnsmasq-dns-7fd796d7df-nrmcm" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.624893 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a128d87d-63cd-4f75-8f47-5bb700c496f3-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-nrmcm\" (UID: \"a128d87d-63cd-4f75-8f47-5bb700c496f3\") " pod="openstack/dnsmasq-dns-7fd796d7df-nrmcm" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.625514 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a128d87d-63cd-4f75-8f47-5bb700c496f3-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-nrmcm\" (UID: \"a128d87d-63cd-4f75-8f47-5bb700c496f3\") " pod="openstack/dnsmasq-dns-7fd796d7df-nrmcm" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.625975 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1db280e5-ecd7-44cf-933a-2d55ba6f7b42-ovs-rundir\") pod \"ovn-controller-metrics-qxwjl\" (UID: \"1db280e5-ecd7-44cf-933a-2d55ba6f7b42\") " pod="openstack/ovn-controller-metrics-qxwjl" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.640960 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1db280e5-ecd7-44cf-933a-2d55ba6f7b42-combined-ca-bundle\") pod \"ovn-controller-metrics-qxwjl\" (UID: \"1db280e5-ecd7-44cf-933a-2d55ba6f7b42\") " pod="openstack/ovn-controller-metrics-qxwjl" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.644549 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5md4\" (UniqueName: \"kubernetes.io/projected/a128d87d-63cd-4f75-8f47-5bb700c496f3-kube-api-access-k5md4\") pod \"dnsmasq-dns-7fd796d7df-nrmcm\" (UID: \"a128d87d-63cd-4f75-8f47-5bb700c496f3\") " pod="openstack/dnsmasq-dns-7fd796d7df-nrmcm" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.649559 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fd457"] Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.651213 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1db280e5-ecd7-44cf-933a-2d55ba6f7b42-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qxwjl\" (UID: \"1db280e5-ecd7-44cf-933a-2d55ba6f7b42\") " pod="openstack/ovn-controller-metrics-qxwjl" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.652256 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-fd457" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.660658 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.663854 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fd457"] Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.666282 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmv4q\" (UniqueName: \"kubernetes.io/projected/1db280e5-ecd7-44cf-933a-2d55ba6f7b42-kube-api-access-cmv4q\") pod \"ovn-controller-metrics-qxwjl\" (UID: \"1db280e5-ecd7-44cf-933a-2d55ba6f7b42\") " pod="openstack/ovn-controller-metrics-qxwjl" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.721997 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrnnm\" (UniqueName: \"kubernetes.io/projected/49de438f-c2ca-4d52-a9ca-47fb8ef7ec81-kube-api-access-xrnnm\") pod \"keystone-6cf5-account-create-update-shwg2\" (UID: \"49de438f-c2ca-4d52-a9ca-47fb8ef7ec81\") " pod="openstack/keystone-6cf5-account-create-update-shwg2" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.722075 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49de438f-c2ca-4d52-a9ca-47fb8ef7ec81-operator-scripts\") pod \"keystone-6cf5-account-create-update-shwg2\" (UID: \"49de438f-c2ca-4d52-a9ca-47fb8ef7ec81\") " pod="openstack/keystone-6cf5-account-create-update-shwg2" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.723800 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49de438f-c2ca-4d52-a9ca-47fb8ef7ec81-operator-scripts\") pod \"keystone-6cf5-account-create-update-shwg2\" (UID: \"49de438f-c2ca-4d52-a9ca-47fb8ef7ec81\") " pod="openstack/keystone-6cf5-account-create-update-shwg2" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.727235 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.729059 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-5xz5w"] Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.730214 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5xz5w"] Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.730278 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.730320 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-5xz5w" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.732143 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.733019 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.735774 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.736185 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-km54t" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.736119 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.736816 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-qxwjl" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.737563 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.743828 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrnnm\" (UniqueName: \"kubernetes.io/projected/49de438f-c2ca-4d52-a9ca-47fb8ef7ec81-kube-api-access-xrnnm\") pod \"keystone-6cf5-account-create-update-shwg2\" (UID: \"49de438f-c2ca-4d52-a9ca-47fb8ef7ec81\") " pod="openstack/keystone-6cf5-account-create-update-shwg2" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.823786 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9dc35964-1186-483a-8904-c98af6497c53-scripts\") pod \"ovn-northd-0\" (UID: \"9dc35964-1186-483a-8904-c98af6497c53\") " pod="openstack/ovn-northd-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.823883 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51b55cac-ddde-4a68-a081-d3e34e4b39fc-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-fd457\" (UID: \"51b55cac-ddde-4a68-a081-d3e34e4b39fc\") " pod="openstack/dnsmasq-dns-86db49b7ff-fd457" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.823923 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc35964-1186-483a-8904-c98af6497c53-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9dc35964-1186-483a-8904-c98af6497c53\") " pod="openstack/ovn-northd-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.823983 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51b55cac-ddde-4a68-a081-d3e34e4b39fc-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-fd457\" (UID: \"51b55cac-ddde-4a68-a081-d3e34e4b39fc\") " pod="openstack/dnsmasq-dns-86db49b7ff-fd457" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.824007 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51b55cac-ddde-4a68-a081-d3e34e4b39fc-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-fd457\" (UID: 
\"51b55cac-ddde-4a68-a081-d3e34e4b39fc\") " pod="openstack/dnsmasq-dns-86db49b7ff-fd457" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.824034 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzr4c\" (UniqueName: \"kubernetes.io/projected/61e85825-be78-4eba-9c52-3649968d0390-kube-api-access-nzr4c\") pod \"placement-db-create-5xz5w\" (UID: \"61e85825-be78-4eba-9c52-3649968d0390\") " pod="openstack/placement-db-create-5xz5w" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.824061 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5gbh\" (UniqueName: \"kubernetes.io/projected/9dc35964-1186-483a-8904-c98af6497c53-kube-api-access-h5gbh\") pod \"ovn-northd-0\" (UID: \"9dc35964-1186-483a-8904-c98af6497c53\") " pod="openstack/ovn-northd-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.824088 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc35964-1186-483a-8904-c98af6497c53-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9dc35964-1186-483a-8904-c98af6497c53\") " pod="openstack/ovn-northd-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.824157 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61e85825-be78-4eba-9c52-3649968d0390-operator-scripts\") pod \"placement-db-create-5xz5w\" (UID: \"61e85825-be78-4eba-9c52-3649968d0390\") " pod="openstack/placement-db-create-5xz5w" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.824212 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrl4p\" (UniqueName: \"kubernetes.io/projected/51b55cac-ddde-4a68-a081-d3e34e4b39fc-kube-api-access-hrl4p\") pod \"dnsmasq-dns-86db49b7ff-fd457\" (UID: \"51b55cac-ddde-4a68-a081-d3e34e4b39fc\") " pod="openstack/dnsmasq-dns-86db49b7ff-fd457" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.824241 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b55cac-ddde-4a68-a081-d3e34e4b39fc-config\") pod \"dnsmasq-dns-86db49b7ff-fd457\" (UID: \"51b55cac-ddde-4a68-a081-d3e34e4b39fc\") " pod="openstack/dnsmasq-dns-86db49b7ff-fd457" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.824284 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dc35964-1186-483a-8904-c98af6497c53-config\") pod \"ovn-northd-0\" (UID: \"9dc35964-1186-483a-8904-c98af6497c53\") " pod="openstack/ovn-northd-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.824964 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9dc35964-1186-483a-8904-c98af6497c53-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9dc35964-1186-483a-8904-c98af6497c53\") " pod="openstack/ovn-northd-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.825031 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc35964-1186-483a-8904-c98af6497c53-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"9dc35964-1186-483a-8904-c98af6497c53\") " pod="openstack/ovn-northd-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.835080 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-0fd3-account-create-update-cjld8"] Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.842114 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6cf5-account-create-update-shwg2" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.845869 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0fd3-account-create-update-cjld8" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.848333 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.912369 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-0fd3-account-create-update-cjld8"] Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.922711 4933 generic.go:334] "Generic (PLEG): container finished" podID="cb7463c3-8d88-4334-8627-a4f62371faf8" containerID="e68703bf27cbf2e6cbb613e44a4c6b5285d985ca68a617b2c17ab0d9dd077660" exitCode=0 Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.922815 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2z4vq" event={"ID":"cb7463c3-8d88-4334-8627-a4f62371faf8","Type":"ContainerDied","Data":"e68703bf27cbf2e6cbb613e44a4c6b5285d985ca68a617b2c17ab0d9dd077660"} Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.938977 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-nrmcm" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.939245 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346-operator-scripts\") pod \"placement-0fd3-account-create-update-cjld8\" (UID: \"5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346\") " pod="openstack/placement-0fd3-account-create-update-cjld8" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.939348 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51b55cac-ddde-4a68-a081-d3e34e4b39fc-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-fd457\" (UID: \"51b55cac-ddde-4a68-a081-d3e34e4b39fc\") " pod="openstack/dnsmasq-dns-86db49b7ff-fd457" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.939385 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc35964-1186-483a-8904-c98af6497c53-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9dc35964-1186-483a-8904-c98af6497c53\") " pod="openstack/ovn-northd-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.939454 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51b55cac-ddde-4a68-a081-d3e34e4b39fc-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-fd457\" (UID: \"51b55cac-ddde-4a68-a081-d3e34e4b39fc\") " pod="openstack/dnsmasq-dns-86db49b7ff-fd457" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.939482 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/51b55cac-ddde-4a68-a081-d3e34e4b39fc-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-fd457\" (UID: \"51b55cac-ddde-4a68-a081-d3e34e4b39fc\") " pod="openstack/dnsmasq-dns-86db49b7ff-fd457" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.939507 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzr4c\" (UniqueName: \"kubernetes.io/projected/61e85825-be78-4eba-9c52-3649968d0390-kube-api-access-nzr4c\") pod \"placement-db-create-5xz5w\" (UID: \"61e85825-be78-4eba-9c52-3649968d0390\") " pod="openstack/placement-db-create-5xz5w" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.939538 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5gbh\" (UniqueName: \"kubernetes.io/projected/9dc35964-1186-483a-8904-c98af6497c53-kube-api-access-h5gbh\") pod \"ovn-northd-0\" (UID: \"9dc35964-1186-483a-8904-c98af6497c53\") " pod="openstack/ovn-northd-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.939587 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc35964-1186-483a-8904-c98af6497c53-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9dc35964-1186-483a-8904-c98af6497c53\") " pod="openstack/ovn-northd-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.939682 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61e85825-be78-4eba-9c52-3649968d0390-operator-scripts\") pod \"placement-db-create-5xz5w\" (UID: \"61e85825-be78-4eba-9c52-3649968d0390\") " pod="openstack/placement-db-create-5xz5w" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.939739 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c69wp\" (UniqueName: \"kubernetes.io/projected/5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346-kube-api-access-c69wp\") pod \"placement-0fd3-account-create-update-cjld8\" (UID: \"5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346\") " pod="openstack/placement-0fd3-account-create-update-cjld8" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.939813 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrl4p\" (UniqueName: \"kubernetes.io/projected/51b55cac-ddde-4a68-a081-d3e34e4b39fc-kube-api-access-hrl4p\") pod \"dnsmasq-dns-86db49b7ff-fd457\" (UID: \"51b55cac-ddde-4a68-a081-d3e34e4b39fc\") " pod="openstack/dnsmasq-dns-86db49b7ff-fd457" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.939852 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b55cac-ddde-4a68-a081-d3e34e4b39fc-config\") pod \"dnsmasq-dns-86db49b7ff-fd457\" (UID: \"51b55cac-ddde-4a68-a081-d3e34e4b39fc\") " pod="openstack/dnsmasq-dns-86db49b7ff-fd457" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.939928 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dc35964-1186-483a-8904-c98af6497c53-config\") pod \"ovn-northd-0\" (UID: \"9dc35964-1186-483a-8904-c98af6497c53\") " pod="openstack/ovn-northd-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.940025 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9dc35964-1186-483a-8904-c98af6497c53-ovn-rundir\") pod 
\"ovn-northd-0\" (UID: \"9dc35964-1186-483a-8904-c98af6497c53\") " pod="openstack/ovn-northd-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.940081 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc35964-1186-483a-8904-c98af6497c53-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9dc35964-1186-483a-8904-c98af6497c53\") " pod="openstack/ovn-northd-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.940270 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9dc35964-1186-483a-8904-c98af6497c53-scripts\") pod \"ovn-northd-0\" (UID: \"9dc35964-1186-483a-8904-c98af6497c53\") " pod="openstack/ovn-northd-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.941296 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9dc35964-1186-483a-8904-c98af6497c53-scripts\") pod \"ovn-northd-0\" (UID: \"9dc35964-1186-483a-8904-c98af6497c53\") " pod="openstack/ovn-northd-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.941799 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51b55cac-ddde-4a68-a081-d3e34e4b39fc-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-fd457\" (UID: \"51b55cac-ddde-4a68-a081-d3e34e4b39fc\") " pod="openstack/dnsmasq-dns-86db49b7ff-fd457" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.943325 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61e85825-be78-4eba-9c52-3649968d0390-operator-scripts\") pod \"placement-db-create-5xz5w\" (UID: \"61e85825-be78-4eba-9c52-3649968d0390\") " pod="openstack/placement-db-create-5xz5w" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.943453 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b55cac-ddde-4a68-a081-d3e34e4b39fc-config\") pod \"dnsmasq-dns-86db49b7ff-fd457\" (UID: \"51b55cac-ddde-4a68-a081-d3e34e4b39fc\") " pod="openstack/dnsmasq-dns-86db49b7ff-fd457" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.943475 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51b55cac-ddde-4a68-a081-d3e34e4b39fc-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-fd457\" (UID: \"51b55cac-ddde-4a68-a081-d3e34e4b39fc\") " pod="openstack/dnsmasq-dns-86db49b7ff-fd457" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.943752 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dc35964-1186-483a-8904-c98af6497c53-config\") pod \"ovn-northd-0\" (UID: \"9dc35964-1186-483a-8904-c98af6497c53\") " pod="openstack/ovn-northd-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.943743 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51b55cac-ddde-4a68-a081-d3e34e4b39fc-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-fd457\" (UID: \"51b55cac-ddde-4a68-a081-d3e34e4b39fc\") " pod="openstack/dnsmasq-dns-86db49b7ff-fd457" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.947274 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9dc35964-1186-483a-8904-c98af6497c53-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9dc35964-1186-483a-8904-c98af6497c53\") " pod="openstack/ovn-northd-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.948363 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc35964-1186-483a-8904-c98af6497c53-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9dc35964-1186-483a-8904-c98af6497c53\") " pod="openstack/ovn-northd-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.950132 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc35964-1186-483a-8904-c98af6497c53-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9dc35964-1186-483a-8904-c98af6497c53\") " pod="openstack/ovn-northd-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.953503 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9dc35964-1186-483a-8904-c98af6497c53-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9dc35964-1186-483a-8904-c98af6497c53\") " pod="openstack/ovn-northd-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.957664 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.976397 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzr4c\" (UniqueName: \"kubernetes.io/projected/61e85825-be78-4eba-9c52-3649968d0390-kube-api-access-nzr4c\") pod \"placement-db-create-5xz5w\" (UID: \"61e85825-be78-4eba-9c52-3649968d0390\") " pod="openstack/placement-db-create-5xz5w" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.979575 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5gbh\" (UniqueName: \"kubernetes.io/projected/9dc35964-1186-483a-8904-c98af6497c53-kube-api-access-h5gbh\") pod \"ovn-northd-0\" (UID: \"9dc35964-1186-483a-8904-c98af6497c53\") " pod="openstack/ovn-northd-0" Dec 01 09:50:09 crc kubenswrapper[4933]: I1201 09:50:09.997029 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrl4p\" (UniqueName: \"kubernetes.io/projected/51b55cac-ddde-4a68-a081-d3e34e4b39fc-kube-api-access-hrl4p\") pod \"dnsmasq-dns-86db49b7ff-fd457\" (UID: \"51b55cac-ddde-4a68-a081-d3e34e4b39fc\") " pod="openstack/dnsmasq-dns-86db49b7ff-fd457" Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.028273 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-25mh8"] Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.031410 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-25mh8" Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.039600 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-25mh8"] Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.041880 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346-operator-scripts\") pod \"placement-0fd3-account-create-update-cjld8\" (UID: \"5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346\") " pod="openstack/placement-0fd3-account-create-update-cjld8" Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.041978 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c69wp\" (UniqueName: \"kubernetes.io/projected/5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346-kube-api-access-c69wp\") pod \"placement-0fd3-account-create-update-cjld8\" (UID: \"5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346\") " pod="openstack/placement-0fd3-account-create-update-cjld8" Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.044949 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346-operator-scripts\") pod \"placement-0fd3-account-create-update-cjld8\" (UID: \"5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346\") " pod="openstack/placement-0fd3-account-create-update-cjld8" Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.092026 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c69wp\" (UniqueName: \"kubernetes.io/projected/5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346-kube-api-access-c69wp\") pod \"placement-0fd3-account-create-update-cjld8\" (UID: \"5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346\") " pod="openstack/placement-0fd3-account-create-update-cjld8" Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.129963 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7ef6-account-create-update-g9stz"] Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.131922 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7ef6-account-create-update-g9stz" Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.139859 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.147791 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7ef6-account-create-update-g9stz"] Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.160552 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-fd457" Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.199552 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5xz5w" Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.209005 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.224912 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.245001 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-0fd3-account-create-update-cjld8" Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.249342 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p5bx\" (UniqueName: \"kubernetes.io/projected/031657e9-4699-435e-a8e2-0e2442a10dd0-kube-api-access-5p5bx\") pod \"glance-db-create-25mh8\" (UID: \"031657e9-4699-435e-a8e2-0e2442a10dd0\") " pod="openstack/glance-db-create-25mh8" Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.249603 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81eeda37-3cd9-4518-8561-9b414ec377e7-operator-scripts\") pod \"glance-7ef6-account-create-update-g9stz\" (UID: \"81eeda37-3cd9-4518-8561-9b414ec377e7\") " pod="openstack/glance-7ef6-account-create-update-g9stz" Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.249647 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/031657e9-4699-435e-a8e2-0e2442a10dd0-operator-scripts\") pod \"glance-db-create-25mh8\" (UID: \"031657e9-4699-435e-a8e2-0e2442a10dd0\") " pod="openstack/glance-db-create-25mh8" Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.249723 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsvlz\" (UniqueName: \"kubernetes.io/projected/81eeda37-3cd9-4518-8561-9b414ec377e7-kube-api-access-fsvlz\") pod \"glance-7ef6-account-create-update-g9stz\" (UID: \"81eeda37-3cd9-4518-8561-9b414ec377e7\") " pod="openstack/glance-7ef6-account-create-update-g9stz" Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.353535 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/031657e9-4699-435e-a8e2-0e2442a10dd0-operator-scripts\") pod \"glance-db-create-25mh8\" (UID: \"031657e9-4699-435e-a8e2-0e2442a10dd0\") " pod="openstack/glance-db-create-25mh8" Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.353634 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsvlz\" (UniqueName: \"kubernetes.io/projected/81eeda37-3cd9-4518-8561-9b414ec377e7-kube-api-access-fsvlz\") pod \"glance-7ef6-account-create-update-g9stz\" (UID: \"81eeda37-3cd9-4518-8561-9b414ec377e7\") " pod="openstack/glance-7ef6-account-create-update-g9stz" Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.353680 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p5bx\" (UniqueName: \"kubernetes.io/projected/031657e9-4699-435e-a8e2-0e2442a10dd0-kube-api-access-5p5bx\") pod \"glance-db-create-25mh8\" (UID: \"031657e9-4699-435e-a8e2-0e2442a10dd0\") " pod="openstack/glance-db-create-25mh8" Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.353758 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81eeda37-3cd9-4518-8561-9b414ec377e7-operator-scripts\") pod \"glance-7ef6-account-create-update-g9stz\" (UID: \"81eeda37-3cd9-4518-8561-9b414ec377e7\") " pod="openstack/glance-7ef6-account-create-update-g9stz" Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.354618 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/81eeda37-3cd9-4518-8561-9b414ec377e7-operator-scripts\") pod \"glance-7ef6-account-create-update-g9stz\" (UID: \"81eeda37-3cd9-4518-8561-9b414ec377e7\") " pod="openstack/glance-7ef6-account-create-update-g9stz" Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.354980 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/031657e9-4699-435e-a8e2-0e2442a10dd0-operator-scripts\") pod \"glance-db-create-25mh8\" (UID: \"031657e9-4699-435e-a8e2-0e2442a10dd0\") " pod="openstack/glance-db-create-25mh8" Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.565981 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p5bx\" (UniqueName: \"kubernetes.io/projected/031657e9-4699-435e-a8e2-0e2442a10dd0-kube-api-access-5p5bx\") pod \"glance-db-create-25mh8\" (UID: \"031657e9-4699-435e-a8e2-0e2442a10dd0\") " pod="openstack/glance-db-create-25mh8" Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.569219 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsvlz\" (UniqueName: \"kubernetes.io/projected/81eeda37-3cd9-4518-8561-9b414ec377e7-kube-api-access-fsvlz\") pod \"glance-7ef6-account-create-update-g9stz\" (UID: \"81eeda37-3cd9-4518-8561-9b414ec377e7\") " pod="openstack/glance-7ef6-account-create-update-g9stz" Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.596396 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7ef6-account-create-update-g9stz" Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.711575 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6cf5-account-create-update-shwg2"] Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.753606 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-25mh8" Dec 01 09:50:10 crc kubenswrapper[4933]: W1201 09:50:10.785217 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49de438f_c2ca_4d52_a9ca_47fb8ef7ec81.slice/crio-dfd3e9c8e159fc49d40c3e4c564ae71c4273367e125655c6824a38fbc09d4008 WatchSource:0}: Error finding container dfd3e9c8e159fc49d40c3e4c564ae71c4273367e125655c6824a38fbc09d4008: Status 404 returned error can't find the container with id dfd3e9c8e159fc49d40c3e4c564ae71c4273367e125655c6824a38fbc09d4008 Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.994160 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6cf5-account-create-update-shwg2" event={"ID":"49de438f-c2ca-4d52-a9ca-47fb8ef7ec81","Type":"ContainerStarted","Data":"dfd3e9c8e159fc49d40c3e4c564ae71c4273367e125655c6824a38fbc09d4008"} Dec 01 09:50:10 crc kubenswrapper[4933]: I1201 09:50:10.994899 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc8479f9-qz445" podUID="01a1a54d-3a8e-4dfd-aed1-d904670bce61" containerName="dnsmasq-dns" containerID="cri-o://1b96d933c47c5ae5c8c313dd7d9e4e4fdf5b45ccd7390a73f65a91152a6d3ae6" gracePeriod=10 Dec 01 09:50:11 crc kubenswrapper[4933]: I1201 09:50:11.230785 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2z4vq" Dec 01 09:50:11 crc kubenswrapper[4933]: I1201 09:50:11.319086 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89ghp\" (UniqueName: \"kubernetes.io/projected/cb7463c3-8d88-4334-8627-a4f62371faf8-kube-api-access-89ghp\") pod \"cb7463c3-8d88-4334-8627-a4f62371faf8\" (UID: \"cb7463c3-8d88-4334-8627-a4f62371faf8\") " Dec 01 09:50:11 crc kubenswrapper[4933]: I1201 09:50:11.319343 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb7463c3-8d88-4334-8627-a4f62371faf8-dns-svc\") pod \"cb7463c3-8d88-4334-8627-a4f62371faf8\" (UID: \"cb7463c3-8d88-4334-8627-a4f62371faf8\") " Dec 01 09:50:11 crc kubenswrapper[4933]: I1201 09:50:11.326641 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb7463c3-8d88-4334-8627-a4f62371faf8-config\") pod \"cb7463c3-8d88-4334-8627-a4f62371faf8\" (UID: \"cb7463c3-8d88-4334-8627-a4f62371faf8\") " Dec 01 09:50:11 crc kubenswrapper[4933]: I1201 09:50:11.333066 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb7463c3-8d88-4334-8627-a4f62371faf8-kube-api-access-89ghp" (OuterVolumeSpecName: "kube-api-access-89ghp") pod "cb7463c3-8d88-4334-8627-a4f62371faf8" (UID: "cb7463c3-8d88-4334-8627-a4f62371faf8"). InnerVolumeSpecName "kube-api-access-89ghp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:50:11 crc kubenswrapper[4933]: E1201 09:50:11.356709 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cb7463c3-8d88-4334-8627-a4f62371faf8-config podName:cb7463c3-8d88-4334-8627-a4f62371faf8 nodeName:}" failed. No retries permitted until 2025-12-01 09:50:11.856649634 +0000 UTC m=+1102.498373279 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config" (UniqueName: "kubernetes.io/configmap/cb7463c3-8d88-4334-8627-a4f62371faf8-config") pod "cb7463c3-8d88-4334-8627-a4f62371faf8" (UID: "cb7463c3-8d88-4334-8627-a4f62371faf8") : error deleting /var/lib/kubelet/pods/cb7463c3-8d88-4334-8627-a4f62371faf8/volume-subpaths: remove /var/lib/kubelet/pods/cb7463c3-8d88-4334-8627-a4f62371faf8/volume-subpaths: no such file or directory Dec 01 09:50:11 crc kubenswrapper[4933]: I1201 09:50:11.358426 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb7463c3-8d88-4334-8627-a4f62371faf8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cb7463c3-8d88-4334-8627-a4f62371faf8" (UID: "cb7463c3-8d88-4334-8627-a4f62371faf8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:50:11 crc kubenswrapper[4933]: I1201 09:50:11.430107 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89ghp\" (UniqueName: \"kubernetes.io/projected/cb7463c3-8d88-4334-8627-a4f62371faf8-kube-api-access-89ghp\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:11 crc kubenswrapper[4933]: I1201 09:50:11.430162 4933 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb7463c3-8d88-4334-8627-a4f62371faf8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:11 crc kubenswrapper[4933]: I1201 09:50:11.478334 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 01 09:50:11 crc kubenswrapper[4933]: I1201 09:50:11.525417 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 01 09:50:11 crc kubenswrapper[4933]: W1201 09:50:11.603383 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1db280e5_ecd7_44cf_933a_2d55ba6f7b42.slice/crio-4deec934b7ce7ff6063b1c06bafe6b821d9cfbd7abae938e441365d559cc4f24 WatchSource:0}: Error finding container 4deec934b7ce7ff6063b1c06bafe6b821d9cfbd7abae938e441365d559cc4f24: Status 404 returned error can't find the container with id 4deec934b7ce7ff6063b1c06bafe6b821d9cfbd7abae938e441365d559cc4f24 Dec 01 09:50:11 crc kubenswrapper[4933]: I1201 09:50:11.606883 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-qxwjl"] Dec 01 09:50:11 crc kubenswrapper[4933]: I1201 09:50:11.662940 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-nrmcm"] Dec 01 09:50:11 crc kubenswrapper[4933]: I1201 09:50:11.816769 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-7blxn"] Dec 01 09:50:11 crc kubenswrapper[4933]: E1201 09:50:11.817195 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb7463c3-8d88-4334-8627-a4f62371faf8" containerName="init" Dec 01 09:50:11 crc kubenswrapper[4933]: I1201 09:50:11.817210 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb7463c3-8d88-4334-8627-a4f62371faf8" containerName="init" Dec 01 09:50:11 crc kubenswrapper[4933]: I1201 09:50:11.817414 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb7463c3-8d88-4334-8627-a4f62371faf8" containerName="init" Dec 01 09:50:11 crc kubenswrapper[4933]: I1201 09:50:11.818646 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7blxn"] Dec 01 09:50:11 crc kubenswrapper[4933]: I1201 09:50:11.818751 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7blxn" Dec 01 09:50:11 crc kubenswrapper[4933]: I1201 09:50:11.844672 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5xz5w"] Dec 01 09:50:11 crc kubenswrapper[4933]: I1201 09:50:11.892645 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-0fd3-account-create-update-cjld8"] Dec 01 09:50:11 crc kubenswrapper[4933]: W1201 09:50:11.932020 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ca0cee0_20d1_4fa4_9ea7_c7c84f0a4346.slice/crio-3e8fa44702baf6436bd2665f68d1edc10fcd0ea5347a8a13e7a3735b8ea3e7d3 WatchSource:0}: Error finding container 3e8fa44702baf6436bd2665f68d1edc10fcd0ea5347a8a13e7a3735b8ea3e7d3: Status 404 returned error can't find the container with id 3e8fa44702baf6436bd2665f68d1edc10fcd0ea5347a8a13e7a3735b8ea3e7d3 Dec 01 09:50:11 crc kubenswrapper[4933]: I1201 09:50:11.958499 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb7463c3-8d88-4334-8627-a4f62371faf8-config\") pod \"cb7463c3-8d88-4334-8627-a4f62371faf8\" (UID: \"cb7463c3-8d88-4334-8627-a4f62371faf8\") " Dec 01 09:50:11 crc kubenswrapper[4933]: I1201 09:50:11.958982 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9b9g\" (UniqueName: \"kubernetes.io/projected/4696ea4a-1bc2-4e1b-9209-67beddf255e8-kube-api-access-m9b9g\") pod \"dnsmasq-dns-698758b865-7blxn\" (UID: \"4696ea4a-1bc2-4e1b-9209-67beddf255e8\") " pod="openstack/dnsmasq-dns-698758b865-7blxn" Dec 01 09:50:11 crc kubenswrapper[4933]: I1201 09:50:11.959029 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4696ea4a-1bc2-4e1b-9209-67beddf255e8-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7blxn\" (UID: \"4696ea4a-1bc2-4e1b-9209-67beddf255e8\") " pod="openstack/dnsmasq-dns-698758b865-7blxn" Dec 01 09:50:11 crc kubenswrapper[4933]: I1201 09:50:11.959062 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4696ea4a-1bc2-4e1b-9209-67beddf255e8-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7blxn\" (UID: \"4696ea4a-1bc2-4e1b-9209-67beddf255e8\") " pod="openstack/dnsmasq-dns-698758b865-7blxn" Dec 01 09:50:11 crc kubenswrapper[4933]: I1201 09:50:11.959102 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4696ea4a-1bc2-4e1b-9209-67beddf255e8-dns-svc\") pod \"dnsmasq-dns-698758b865-7blxn\" (UID: \"4696ea4a-1bc2-4e1b-9209-67beddf255e8\") " pod="openstack/dnsmasq-dns-698758b865-7blxn" Dec 01 09:50:11 crc kubenswrapper[4933]: I1201 09:50:11.959156 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4696ea4a-1bc2-4e1b-9209-67beddf255e8-config\") pod \"dnsmasq-dns-698758b865-7blxn\" (UID: \"4696ea4a-1bc2-4e1b-9209-67beddf255e8\") " pod="openstack/dnsmasq-dns-698758b865-7blxn" Dec 01 09:50:11 crc kubenswrapper[4933]: I1201 09:50:11.959409 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb7463c3-8d88-4334-8627-a4f62371faf8-config" (OuterVolumeSpecName: "config") 
pod "cb7463c3-8d88-4334-8627-a4f62371faf8" (UID: "cb7463c3-8d88-4334-8627-a4f62371faf8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.011061 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fd457"] Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.021908 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0fd3-account-create-update-cjld8" event={"ID":"5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346","Type":"ContainerStarted","Data":"3e8fa44702baf6436bd2665f68d1edc10fcd0ea5347a8a13e7a3735b8ea3e7d3"} Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.022341 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-nrmcm"] Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.039990 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2z4vq" event={"ID":"cb7463c3-8d88-4334-8627-a4f62371faf8","Type":"ContainerDied","Data":"8f1322c328882a3249562d54bf7b54189c82b83e3926ba59723d5d99571540e1"} Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.040071 4933 scope.go:117] "RemoveContainer" containerID="e68703bf27cbf2e6cbb613e44a4c6b5285d985ca68a617b2c17ab0d9dd077660" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.040081 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2z4vq" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.066129 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4696ea4a-1bc2-4e1b-9209-67beddf255e8-dns-svc\") pod \"dnsmasq-dns-698758b865-7blxn\" (UID: \"4696ea4a-1bc2-4e1b-9209-67beddf255e8\") " pod="openstack/dnsmasq-dns-698758b865-7blxn" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.066251 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4696ea4a-1bc2-4e1b-9209-67beddf255e8-config\") pod \"dnsmasq-dns-698758b865-7blxn\" (UID: \"4696ea4a-1bc2-4e1b-9209-67beddf255e8\") " pod="openstack/dnsmasq-dns-698758b865-7blxn" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.066581 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9b9g\" (UniqueName: \"kubernetes.io/projected/4696ea4a-1bc2-4e1b-9209-67beddf255e8-kube-api-access-m9b9g\") pod \"dnsmasq-dns-698758b865-7blxn\" (UID: \"4696ea4a-1bc2-4e1b-9209-67beddf255e8\") " pod="openstack/dnsmasq-dns-698758b865-7blxn" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.066622 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4696ea4a-1bc2-4e1b-9209-67beddf255e8-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7blxn\" (UID: \"4696ea4a-1bc2-4e1b-9209-67beddf255e8\") " pod="openstack/dnsmasq-dns-698758b865-7blxn" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.066680 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4696ea4a-1bc2-4e1b-9209-67beddf255e8-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7blxn\" (UID: \"4696ea4a-1bc2-4e1b-9209-67beddf255e8\") " pod="openstack/dnsmasq-dns-698758b865-7blxn" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.066772 4933 reconciler_common.go:293] "Volume detached 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb7463c3-8d88-4334-8627-a4f62371faf8-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.067875 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4696ea4a-1bc2-4e1b-9209-67beddf255e8-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7blxn\" (UID: \"4696ea4a-1bc2-4e1b-9209-67beddf255e8\") " pod="openstack/dnsmasq-dns-698758b865-7blxn" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.068919 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4696ea4a-1bc2-4e1b-9209-67beddf255e8-dns-svc\") pod \"dnsmasq-dns-698758b865-7blxn\" (UID: \"4696ea4a-1bc2-4e1b-9209-67beddf255e8\") " pod="openstack/dnsmasq-dns-698758b865-7blxn" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.069419 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4696ea4a-1bc2-4e1b-9209-67beddf255e8-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7blxn\" (UID: \"4696ea4a-1bc2-4e1b-9209-67beddf255e8\") " pod="openstack/dnsmasq-dns-698758b865-7blxn" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.069532 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4696ea4a-1bc2-4e1b-9209-67beddf255e8-config\") pod \"dnsmasq-dns-698758b865-7blxn\" (UID: \"4696ea4a-1bc2-4e1b-9209-67beddf255e8\") " pod="openstack/dnsmasq-dns-698758b865-7blxn" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.073714 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7ef6-account-create-update-g9stz"] Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.074440 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9dc35964-1186-483a-8904-c98af6497c53","Type":"ContainerStarted","Data":"90298c5838ed1cd120d9bbbfd711df4503005e8ff47730ee040f7399035e9968"} Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.081680 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5xz5w" event={"ID":"61e85825-be78-4eba-9c52-3649968d0390","Type":"ContainerStarted","Data":"5ddac43a87887bcaf4082fd47e5fc3d1863dbe8ba2cc04814acb99218e7313a3"} Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.087849 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6cf5-account-create-update-shwg2" event={"ID":"49de438f-c2ca-4d52-a9ca-47fb8ef7ec81","Type":"ContainerStarted","Data":"2d616d91c201353db3642b292ecc5cd8140bcce146fce6646e58509494e34a0c"} Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.124632 4933 generic.go:334] "Generic (PLEG): container finished" podID="01a1a54d-3a8e-4dfd-aed1-d904670bce61" containerID="1b96d933c47c5ae5c8c313dd7d9e4e4fdf5b45ccd7390a73f65a91152a6d3ae6" exitCode=0 Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.124783 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-qz445" event={"ID":"01a1a54d-3a8e-4dfd-aed1-d904670bce61","Type":"ContainerDied","Data":"1b96d933c47c5ae5c8c313dd7d9e4e4fdf5b45ccd7390a73f65a91152a6d3ae6"} Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.131765 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6cf5-account-create-update-shwg2" podStartSLOduration=3.131684623 
Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.138454 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9b9g\" (UniqueName: \"kubernetes.io/projected/4696ea4a-1bc2-4e1b-9209-67beddf255e8-kube-api-access-m9b9g\") pod \"dnsmasq-dns-698758b865-7blxn\" (UID: \"4696ea4a-1bc2-4e1b-9209-67beddf255e8\") " pod="openstack/dnsmasq-dns-698758b865-7blxn"
Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.164989 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-qxwjl" event={"ID":"1db280e5-ecd7-44cf-933a-2d55ba6f7b42","Type":"ContainerStarted","Data":"4deec934b7ce7ff6063b1c06bafe6b821d9cfbd7abae938e441365d559cc4f24"}
Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.205509 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7blxn"
Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.225890 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2z4vq"]
Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.236950 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2z4vq"]
Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.245969 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-25mh8"]
Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.314618 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-qz445"
Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.473422 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a1a54d-3a8e-4dfd-aed1-d904670bce61-config\") pod \"01a1a54d-3a8e-4dfd-aed1-d904670bce61\" (UID: \"01a1a54d-3a8e-4dfd-aed1-d904670bce61\") "
Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.473486 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqbdx\" (UniqueName: \"kubernetes.io/projected/01a1a54d-3a8e-4dfd-aed1-d904670bce61-kube-api-access-lqbdx\") pod \"01a1a54d-3a8e-4dfd-aed1-d904670bce61\" (UID: \"01a1a54d-3a8e-4dfd-aed1-d904670bce61\") "
Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.473662 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01a1a54d-3a8e-4dfd-aed1-d904670bce61-dns-svc\") pod \"01a1a54d-3a8e-4dfd-aed1-d904670bce61\" (UID: \"01a1a54d-3a8e-4dfd-aed1-d904670bce61\") "
Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.490271 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01a1a54d-3a8e-4dfd-aed1-d904670bce61-kube-api-access-lqbdx" (OuterVolumeSpecName: "kube-api-access-lqbdx") pod "01a1a54d-3a8e-4dfd-aed1-d904670bce61" (UID: "01a1a54d-3a8e-4dfd-aed1-d904670bce61"). InnerVolumeSpecName "kube-api-access-lqbdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.576979 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqbdx\" (UniqueName: \"kubernetes.io/projected/01a1a54d-3a8e-4dfd-aed1-d904670bce61-kube-api-access-lqbdx\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.608025 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01a1a54d-3a8e-4dfd-aed1-d904670bce61-config" (OuterVolumeSpecName: "config") pod "01a1a54d-3a8e-4dfd-aed1-d904670bce61" (UID: "01a1a54d-3a8e-4dfd-aed1-d904670bce61"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.681871 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a1a54d-3a8e-4dfd-aed1-d904670bce61-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.747494 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01a1a54d-3a8e-4dfd-aed1-d904670bce61-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "01a1a54d-3a8e-4dfd-aed1-d904670bce61" (UID: "01a1a54d-3a8e-4dfd-aed1-d904670bce61"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.785218 4933 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01a1a54d-3a8e-4dfd-aed1-d904670bce61-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.796463 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 01 09:50:12 crc kubenswrapper[4933]: E1201 09:50:12.797251 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a1a54d-3a8e-4dfd-aed1-d904670bce61" containerName="dnsmasq-dns" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.797381 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a1a54d-3a8e-4dfd-aed1-d904670bce61" containerName="dnsmasq-dns" Dec 01 09:50:12 crc kubenswrapper[4933]: E1201 09:50:12.811863 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a1a54d-3a8e-4dfd-aed1-d904670bce61" containerName="init" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.811910 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a1a54d-3a8e-4dfd-aed1-d904670bce61" containerName="init" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.812434 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="01a1a54d-3a8e-4dfd-aed1-d904670bce61" containerName="dnsmasq-dns" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.848739 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.853051 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-6cbjq" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.853377 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.853686 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.864011 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.876272 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.989682 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7blxn"] Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.995286 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln2q7\" (UniqueName: \"kubernetes.io/projected/59f78861-3fff-42c4-9592-4eb047ea6a88-kube-api-access-ln2q7\") pod \"swift-storage-0\" (UID: \"59f78861-3fff-42c4-9592-4eb047ea6a88\") " pod="openstack/swift-storage-0" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.995408 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/59f78861-3fff-42c4-9592-4eb047ea6a88-lock\") pod \"swift-storage-0\" (UID: \"59f78861-3fff-42c4-9592-4eb047ea6a88\") " pod="openstack/swift-storage-0" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.995444 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/59f78861-3fff-42c4-9592-4eb047ea6a88-cache\") pod \"swift-storage-0\" (UID: \"59f78861-3fff-42c4-9592-4eb047ea6a88\") " pod="openstack/swift-storage-0" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.995464 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"59f78861-3fff-42c4-9592-4eb047ea6a88\") " pod="openstack/swift-storage-0" Dec 01 09:50:12 crc kubenswrapper[4933]: I1201 09:50:12.995602 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/59f78861-3fff-42c4-9592-4eb047ea6a88-etc-swift\") pod \"swift-storage-0\" (UID: \"59f78861-3fff-42c4-9592-4eb047ea6a88\") " pod="openstack/swift-storage-0" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.097196 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln2q7\" (UniqueName: \"kubernetes.io/projected/59f78861-3fff-42c4-9592-4eb047ea6a88-kube-api-access-ln2q7\") pod \"swift-storage-0\" (UID: \"59f78861-3fff-42c4-9592-4eb047ea6a88\") " pod="openstack/swift-storage-0" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.097270 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/59f78861-3fff-42c4-9592-4eb047ea6a88-lock\") pod \"swift-storage-0\" (UID: 
\"59f78861-3fff-42c4-9592-4eb047ea6a88\") " pod="openstack/swift-storage-0" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.097329 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/59f78861-3fff-42c4-9592-4eb047ea6a88-cache\") pod \"swift-storage-0\" (UID: \"59f78861-3fff-42c4-9592-4eb047ea6a88\") " pod="openstack/swift-storage-0" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.097360 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"59f78861-3fff-42c4-9592-4eb047ea6a88\") " pod="openstack/swift-storage-0" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.097540 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/59f78861-3fff-42c4-9592-4eb047ea6a88-etc-swift\") pod \"swift-storage-0\" (UID: \"59f78861-3fff-42c4-9592-4eb047ea6a88\") " pod="openstack/swift-storage-0" Dec 01 09:50:13 crc kubenswrapper[4933]: E1201 09:50:13.097744 4933 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 01 09:50:13 crc kubenswrapper[4933]: E1201 09:50:13.097767 4933 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 01 09:50:13 crc kubenswrapper[4933]: E1201 09:50:13.097837 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59f78861-3fff-42c4-9592-4eb047ea6a88-etc-swift podName:59f78861-3fff-42c4-9592-4eb047ea6a88 nodeName:}" failed. No retries permitted until 2025-12-01 09:50:13.597811252 +0000 UTC m=+1104.239534867 (durationBeforeRetry 500ms). 
Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.098486 4933 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"59f78861-3fff-42c4-9592-4eb047ea6a88\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/swift-storage-0"
Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.098604 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/59f78861-3fff-42c4-9592-4eb047ea6a88-lock\") pod \"swift-storage-0\" (UID: \"59f78861-3fff-42c4-9592-4eb047ea6a88\") " pod="openstack/swift-storage-0"
Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.098812 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/59f78861-3fff-42c4-9592-4eb047ea6a88-cache\") pod \"swift-storage-0\" (UID: \"59f78861-3fff-42c4-9592-4eb047ea6a88\") " pod="openstack/swift-storage-0"
Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.121182 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln2q7\" (UniqueName: \"kubernetes.io/projected/59f78861-3fff-42c4-9592-4eb047ea6a88-kube-api-access-ln2q7\") pod \"swift-storage-0\" (UID: \"59f78861-3fff-42c4-9592-4eb047ea6a88\") " pod="openstack/swift-storage-0"
Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.132951 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"59f78861-3fff-42c4-9592-4eb047ea6a88\") " pod="openstack/swift-storage-0"
Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.178741 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-25mh8" event={"ID":"031657e9-4699-435e-a8e2-0e2442a10dd0","Type":"ContainerStarted","Data":"3a7e4c88cff28bcd149cb1d01fe204c79701912dcebf9c07f68e2a4dff787061"}
Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.178795 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-25mh8" event={"ID":"031657e9-4699-435e-a8e2-0e2442a10dd0","Type":"ContainerStarted","Data":"be6b236c28e37be2b9425a29f17a23caab8cdf9b1389af1673c144b6df96b082"}
Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.182189 4933 generic.go:334] "Generic (PLEG): container finished" podID="61e85825-be78-4eba-9c52-3649968d0390" containerID="c1356040b71202ddb733101737308a02253053e6aa3f78c22036469f9927234f" exitCode=0
Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.182318 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5xz5w" event={"ID":"61e85825-be78-4eba-9c52-3649968d0390","Type":"ContainerDied","Data":"c1356040b71202ddb733101737308a02253053e6aa3f78c22036469f9927234f"}
Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.187187 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-qxwjl" event={"ID":"1db280e5-ecd7-44cf-933a-2d55ba6f7b42","Type":"ContainerStarted","Data":"a3432bd1b3cba6dfef0cb1061d388866809b005d2c728c3a4b4171459b03d8a0"}
Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.190395 4933 generic.go:334] "Generic (PLEG): container finished" podID="a128d87d-63cd-4f75-8f47-5bb700c496f3" containerID="83770231a89ae1ba118c8ae02796d04342c59da9585bb8c29209a6e911c18311" exitCode=0
09:50:13.190395 4933 generic.go:334] "Generic (PLEG): container finished" podID="a128d87d-63cd-4f75-8f47-5bb700c496f3" containerID="83770231a89ae1ba118c8ae02796d04342c59da9585bb8c29209a6e911c18311" exitCode=0 Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.190495 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-nrmcm" event={"ID":"a128d87d-63cd-4f75-8f47-5bb700c496f3","Type":"ContainerDied","Data":"83770231a89ae1ba118c8ae02796d04342c59da9585bb8c29209a6e911c18311"} Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.190539 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-nrmcm" event={"ID":"a128d87d-63cd-4f75-8f47-5bb700c496f3","Type":"ContainerStarted","Data":"e97be7ec5bc4bb4eae3fc13135714e013b3362ccf55df12d1b8c76b5d5b8694f"} Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.193111 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-wpgrh"] Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.194474 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wpgrh" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.196817 4933 generic.go:334] "Generic (PLEG): container finished" podID="5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346" containerID="06968cacc0b4753e6639222a2d977d207844f5c87f1532734a82db20f5c18026" exitCode=0 Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.196936 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0fd3-account-create-update-cjld8" event={"ID":"5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346","Type":"ContainerDied","Data":"06968cacc0b4753e6639222a2d977d207844f5c87f1532734a82db20f5c18026"} Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.196842 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.197551 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.198018 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.208112 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7ef6-account-create-update-g9stz" event={"ID":"81eeda37-3cd9-4518-8561-9b414ec377e7","Type":"ContainerStarted","Data":"ac428dd94534f970a053a9e0930f9c1705565e3c2a2d1f1532742e5a28d8ce68"} Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.208186 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7ef6-account-create-update-g9stz" event={"ID":"81eeda37-3cd9-4518-8561-9b414ec377e7","Type":"ContainerStarted","Data":"7e4ae55537aa5cf4e42a794b663869c59fdf458fc230e3b2d4f4c0414b3dc75c"} Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.232018 4933 generic.go:334] "Generic (PLEG): container finished" podID="49de438f-c2ca-4d52-a9ca-47fb8ef7ec81" containerID="2d616d91c201353db3642b292ecc5cd8140bcce146fce6646e58509494e34a0c" exitCode=0 Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.232204 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6cf5-account-create-update-shwg2" event={"ID":"49de438f-c2ca-4d52-a9ca-47fb8ef7ec81","Type":"ContainerDied","Data":"2d616d91c201353db3642b292ecc5cd8140bcce146fce6646e58509494e34a0c"} Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 
09:50:13.234892 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wpgrh"] Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.241907 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-25mh8" podStartSLOduration=4.24187604 podStartE2EDuration="4.24187604s" podCreationTimestamp="2025-12-01 09:50:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:50:13.209878167 +0000 UTC m=+1103.851601782" watchObservedRunningTime="2025-12-01 09:50:13.24187604 +0000 UTC m=+1103.883599655" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.245116 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-qz445" event={"ID":"01a1a54d-3a8e-4dfd-aed1-d904670bce61","Type":"ContainerDied","Data":"d0887282ea0ab4b31ecd152821667507267cb6fe5763504c1e9031c768550184"} Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.245198 4933 scope.go:117] "RemoveContainer" containerID="1b96d933c47c5ae5c8c313dd7d9e4e4fdf5b45ccd7390a73f65a91152a6d3ae6" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.245435 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-qz445" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.256070 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-qxwjl" podStartSLOduration=4.256041157 podStartE2EDuration="4.256041157s" podCreationTimestamp="2025-12-01 09:50:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:50:13.250688286 +0000 UTC m=+1103.892411901" watchObservedRunningTime="2025-12-01 09:50:13.256041157 +0000 UTC m=+1103.897764772" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.262599 4933 generic.go:334] "Generic (PLEG): container finished" podID="51b55cac-ddde-4a68-a081-d3e34e4b39fc" containerID="138bf4daa34dc6391fc705e4b32af6cb5f0099ba2d148007ac7cc4d4be0ae3cf" exitCode=0 Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.262930 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-fd457" event={"ID":"51b55cac-ddde-4a68-a081-d3e34e4b39fc","Type":"ContainerDied","Data":"138bf4daa34dc6391fc705e4b32af6cb5f0099ba2d148007ac7cc4d4be0ae3cf"} Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.262975 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-fd457" event={"ID":"51b55cac-ddde-4a68-a081-d3e34e4b39fc","Type":"ContainerStarted","Data":"7f192fcd568d0d3c29142e265a6e82ef449edddf97ca15af750d77a0ffcb9c2d"} Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.304557 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fa13e197-3320-4314-86ee-a1b90292ab1d-etc-swift\") pod \"swift-ring-rebalance-wpgrh\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " pod="openstack/swift-ring-rebalance-wpgrh" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.304664 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fa13e197-3320-4314-86ee-a1b90292ab1d-ring-data-devices\") pod \"swift-ring-rebalance-wpgrh\" (UID: 
\"fa13e197-3320-4314-86ee-a1b90292ab1d\") " pod="openstack/swift-ring-rebalance-wpgrh" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.304728 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa13e197-3320-4314-86ee-a1b90292ab1d-scripts\") pod \"swift-ring-rebalance-wpgrh\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " pod="openstack/swift-ring-rebalance-wpgrh" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.304826 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa13e197-3320-4314-86ee-a1b90292ab1d-combined-ca-bundle\") pod \"swift-ring-rebalance-wpgrh\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " pod="openstack/swift-ring-rebalance-wpgrh" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.304855 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fa13e197-3320-4314-86ee-a1b90292ab1d-dispersionconf\") pod \"swift-ring-rebalance-wpgrh\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " pod="openstack/swift-ring-rebalance-wpgrh" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.304886 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fa13e197-3320-4314-86ee-a1b90292ab1d-swiftconf\") pod \"swift-ring-rebalance-wpgrh\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " pod="openstack/swift-ring-rebalance-wpgrh" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.304985 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b8db\" (UniqueName: \"kubernetes.io/projected/fa13e197-3320-4314-86ee-a1b90292ab1d-kube-api-access-9b8db\") pod \"swift-ring-rebalance-wpgrh\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " pod="openstack/swift-ring-rebalance-wpgrh" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.325468 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-7ef6-account-create-update-g9stz" podStartSLOduration=3.325431257 podStartE2EDuration="3.325431257s" podCreationTimestamp="2025-12-01 09:50:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:50:13.304107814 +0000 UTC m=+1103.945831439" watchObservedRunningTime="2025-12-01 09:50:13.325431257 +0000 UTC m=+1103.967154872" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.406336 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fa13e197-3320-4314-86ee-a1b90292ab1d-etc-swift\") pod \"swift-ring-rebalance-wpgrh\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " pod="openstack/swift-ring-rebalance-wpgrh" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.406435 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fa13e197-3320-4314-86ee-a1b90292ab1d-ring-data-devices\") pod \"swift-ring-rebalance-wpgrh\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " pod="openstack/swift-ring-rebalance-wpgrh" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.406489 4933 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa13e197-3320-4314-86ee-a1b90292ab1d-scripts\") pod \"swift-ring-rebalance-wpgrh\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " pod="openstack/swift-ring-rebalance-wpgrh" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.406531 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa13e197-3320-4314-86ee-a1b90292ab1d-combined-ca-bundle\") pod \"swift-ring-rebalance-wpgrh\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " pod="openstack/swift-ring-rebalance-wpgrh" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.406660 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fa13e197-3320-4314-86ee-a1b90292ab1d-dispersionconf\") pod \"swift-ring-rebalance-wpgrh\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " pod="openstack/swift-ring-rebalance-wpgrh" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.406696 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fa13e197-3320-4314-86ee-a1b90292ab1d-swiftconf\") pod \"swift-ring-rebalance-wpgrh\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " pod="openstack/swift-ring-rebalance-wpgrh" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.406962 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b8db\" (UniqueName: \"kubernetes.io/projected/fa13e197-3320-4314-86ee-a1b90292ab1d-kube-api-access-9b8db\") pod \"swift-ring-rebalance-wpgrh\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " pod="openstack/swift-ring-rebalance-wpgrh" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.407868 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fa13e197-3320-4314-86ee-a1b90292ab1d-etc-swift\") pod \"swift-ring-rebalance-wpgrh\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " pod="openstack/swift-ring-rebalance-wpgrh" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.408702 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fa13e197-3320-4314-86ee-a1b90292ab1d-ring-data-devices\") pod \"swift-ring-rebalance-wpgrh\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " pod="openstack/swift-ring-rebalance-wpgrh" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.410568 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa13e197-3320-4314-86ee-a1b90292ab1d-scripts\") pod \"swift-ring-rebalance-wpgrh\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " pod="openstack/swift-ring-rebalance-wpgrh" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.439347 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fa13e197-3320-4314-86ee-a1b90292ab1d-swiftconf\") pod \"swift-ring-rebalance-wpgrh\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " pod="openstack/swift-ring-rebalance-wpgrh" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.443819 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fa13e197-3320-4314-86ee-a1b90292ab1d-dispersionconf\") pod 
\"swift-ring-rebalance-wpgrh\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " pod="openstack/swift-ring-rebalance-wpgrh" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.461800 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b8db\" (UniqueName: \"kubernetes.io/projected/fa13e197-3320-4314-86ee-a1b90292ab1d-kube-api-access-9b8db\") pod \"swift-ring-rebalance-wpgrh\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " pod="openstack/swift-ring-rebalance-wpgrh" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.462247 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa13e197-3320-4314-86ee-a1b90292ab1d-combined-ca-bundle\") pod \"swift-ring-rebalance-wpgrh\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " pod="openstack/swift-ring-rebalance-wpgrh" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.544793 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wpgrh" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.567376 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-qz445"] Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.613273 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/59f78861-3fff-42c4-9592-4eb047ea6a88-etc-swift\") pod \"swift-storage-0\" (UID: \"59f78861-3fff-42c4-9592-4eb047ea6a88\") " pod="openstack/swift-storage-0" Dec 01 09:50:13 crc kubenswrapper[4933]: E1201 09:50:13.613552 4933 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 01 09:50:13 crc kubenswrapper[4933]: E1201 09:50:13.613567 4933 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 01 09:50:13 crc kubenswrapper[4933]: E1201 09:50:13.613608 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59f78861-3fff-42c4-9592-4eb047ea6a88-etc-swift podName:59f78861-3fff-42c4-9592-4eb047ea6a88 nodeName:}" failed. No retries permitted until 2025-12-01 09:50:14.613594113 +0000 UTC m=+1105.255317728 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/59f78861-3fff-42c4-9592-4eb047ea6a88-etc-swift") pod "swift-storage-0" (UID: "59f78861-3fff-42c4-9592-4eb047ea6a88") : configmap "swift-ring-files" not found Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.636602 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-qz445"] Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.783605 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01a1a54d-3a8e-4dfd-aed1-d904670bce61" path="/var/lib/kubelet/pods/01a1a54d-3a8e-4dfd-aed1-d904670bce61/volumes" Dec 01 09:50:13 crc kubenswrapper[4933]: I1201 09:50:13.784239 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb7463c3-8d88-4334-8627-a4f62371faf8" path="/var/lib/kubelet/pods/cb7463c3-8d88-4334-8627-a4f62371faf8/volumes" Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.100044 4933 scope.go:117] "RemoveContainer" containerID="9fcbc98d727a0c65b51905eb06ecb8fda5c5d600f3a38f4c83ccc6d8f93935c1" Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.262611 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-nrmcm" Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.311927 4933 generic.go:334] "Generic (PLEG): container finished" podID="031657e9-4699-435e-a8e2-0e2442a10dd0" containerID="3a7e4c88cff28bcd149cb1d01fe204c79701912dcebf9c07f68e2a4dff787061" exitCode=0 Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.312002 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-25mh8" event={"ID":"031657e9-4699-435e-a8e2-0e2442a10dd0","Type":"ContainerDied","Data":"3a7e4c88cff28bcd149cb1d01fe204c79701912dcebf9c07f68e2a4dff787061"} Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.314099 4933 generic.go:334] "Generic (PLEG): container finished" podID="81eeda37-3cd9-4518-8561-9b414ec377e7" containerID="ac428dd94534f970a053a9e0930f9c1705565e3c2a2d1f1532742e5a28d8ce68" exitCode=0 Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.314140 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7ef6-account-create-update-g9stz" event={"ID":"81eeda37-3cd9-4518-8561-9b414ec377e7","Type":"ContainerDied","Data":"ac428dd94534f970a053a9e0930f9c1705565e3c2a2d1f1532742e5a28d8ce68"} Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.329027 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-nrmcm" Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.329601 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-nrmcm" event={"ID":"a128d87d-63cd-4f75-8f47-5bb700c496f3","Type":"ContainerDied","Data":"e97be7ec5bc4bb4eae3fc13135714e013b3362ccf55df12d1b8c76b5d5b8694f"} Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.329666 4933 scope.go:117] "RemoveContainer" containerID="83770231a89ae1ba118c8ae02796d04342c59da9585bb8c29209a6e911c18311" Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.353720 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7blxn" event={"ID":"4696ea4a-1bc2-4e1b-9209-67beddf255e8","Type":"ContainerStarted","Data":"bf051854a8ddcbca2e2e53ae3b67e54860e646109b47733fb2457e2b7e1a2faa"} Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.439640 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a128d87d-63cd-4f75-8f47-5bb700c496f3-config\") pod \"a128d87d-63cd-4f75-8f47-5bb700c496f3\" (UID: \"a128d87d-63cd-4f75-8f47-5bb700c496f3\") " Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.439796 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a128d87d-63cd-4f75-8f47-5bb700c496f3-ovsdbserver-nb\") pod \"a128d87d-63cd-4f75-8f47-5bb700c496f3\" (UID: \"a128d87d-63cd-4f75-8f47-5bb700c496f3\") " Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.440254 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a128d87d-63cd-4f75-8f47-5bb700c496f3-dns-svc\") pod \"a128d87d-63cd-4f75-8f47-5bb700c496f3\" (UID: \"a128d87d-63cd-4f75-8f47-5bb700c496f3\") " Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.440517 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5md4\" (UniqueName: \"kubernetes.io/projected/a128d87d-63cd-4f75-8f47-5bb700c496f3-kube-api-access-k5md4\") pod \"a128d87d-63cd-4f75-8f47-5bb700c496f3\" (UID: \"a128d87d-63cd-4f75-8f47-5bb700c496f3\") " Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.461994 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a128d87d-63cd-4f75-8f47-5bb700c496f3-kube-api-access-k5md4" (OuterVolumeSpecName: "kube-api-access-k5md4") pod "a128d87d-63cd-4f75-8f47-5bb700c496f3" (UID: "a128d87d-63cd-4f75-8f47-5bb700c496f3"). InnerVolumeSpecName "kube-api-access-k5md4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.479559 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a128d87d-63cd-4f75-8f47-5bb700c496f3-config" (OuterVolumeSpecName: "config") pod "a128d87d-63cd-4f75-8f47-5bb700c496f3" (UID: "a128d87d-63cd-4f75-8f47-5bb700c496f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.495397 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a128d87d-63cd-4f75-8f47-5bb700c496f3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a128d87d-63cd-4f75-8f47-5bb700c496f3" (UID: "a128d87d-63cd-4f75-8f47-5bb700c496f3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.503114 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a128d87d-63cd-4f75-8f47-5bb700c496f3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a128d87d-63cd-4f75-8f47-5bb700c496f3" (UID: "a128d87d-63cd-4f75-8f47-5bb700c496f3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.544331 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a128d87d-63cd-4f75-8f47-5bb700c496f3-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.544992 4933 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a128d87d-63cd-4f75-8f47-5bb700c496f3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.545019 4933 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a128d87d-63cd-4f75-8f47-5bb700c496f3-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.545055 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5md4\" (UniqueName: \"kubernetes.io/projected/a128d87d-63cd-4f75-8f47-5bb700c496f3-kube-api-access-k5md4\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.646787 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/59f78861-3fff-42c4-9592-4eb047ea6a88-etc-swift\") pod \"swift-storage-0\" (UID: \"59f78861-3fff-42c4-9592-4eb047ea6a88\") " pod="openstack/swift-storage-0" Dec 01 09:50:14 crc kubenswrapper[4933]: E1201 09:50:14.647163 4933 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 01 09:50:14 crc kubenswrapper[4933]: E1201 09:50:14.647442 4933 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 01 09:50:14 crc kubenswrapper[4933]: E1201 09:50:14.647545 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59f78861-3fff-42c4-9592-4eb047ea6a88-etc-swift podName:59f78861-3fff-42c4-9592-4eb047ea6a88 nodeName:}" failed. No retries permitted until 2025-12-01 09:50:16.647513123 +0000 UTC m=+1107.289236738 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/59f78861-3fff-42c4-9592-4eb047ea6a88-etc-swift") pod "swift-storage-0" (UID: "59f78861-3fff-42c4-9592-4eb047ea6a88") : configmap "swift-ring-files" not found Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.718424 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wpgrh"] Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.730558 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-nrmcm"] Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.738630 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-nrmcm"] Dec 01 09:50:14 crc kubenswrapper[4933]: W1201 09:50:14.742985 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa13e197_3320_4314_86ee_a1b90292ab1d.slice/crio-c588903cb169f0b18c8df3802258305ddd08d41f28822d804cb1221e3150e4a5 WatchSource:0}: Error finding container c588903cb169f0b18c8df3802258305ddd08d41f28822d804cb1221e3150e4a5: Status 404 returned error can't find the container with id c588903cb169f0b18c8df3802258305ddd08d41f28822d804cb1221e3150e4a5 Dec 01 09:50:14 crc kubenswrapper[4933]: I1201 09:50:14.882646 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0fd3-account-create-update-cjld8" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.003549 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6cf5-account-create-update-shwg2" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.018495 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5xz5w" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.054409 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346-operator-scripts\") pod \"5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346\" (UID: \"5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346\") " Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.054571 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c69wp\" (UniqueName: \"kubernetes.io/projected/5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346-kube-api-access-c69wp\") pod \"5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346\" (UID: \"5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346\") " Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.055828 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346" (UID: "5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.058982 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346-kube-api-access-c69wp" (OuterVolumeSpecName: "kube-api-access-c69wp") pod "5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346" (UID: "5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346"). InnerVolumeSpecName "kube-api-access-c69wp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.156622 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzr4c\" (UniqueName: \"kubernetes.io/projected/61e85825-be78-4eba-9c52-3649968d0390-kube-api-access-nzr4c\") pod \"61e85825-be78-4eba-9c52-3649968d0390\" (UID: \"61e85825-be78-4eba-9c52-3649968d0390\") " Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.156695 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61e85825-be78-4eba-9c52-3649968d0390-operator-scripts\") pod \"61e85825-be78-4eba-9c52-3649968d0390\" (UID: \"61e85825-be78-4eba-9c52-3649968d0390\") " Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.156829 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrnnm\" (UniqueName: \"kubernetes.io/projected/49de438f-c2ca-4d52-a9ca-47fb8ef7ec81-kube-api-access-xrnnm\") pod \"49de438f-c2ca-4d52-a9ca-47fb8ef7ec81\" (UID: \"49de438f-c2ca-4d52-a9ca-47fb8ef7ec81\") " Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.156859 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49de438f-c2ca-4d52-a9ca-47fb8ef7ec81-operator-scripts\") pod \"49de438f-c2ca-4d52-a9ca-47fb8ef7ec81\" (UID: \"49de438f-c2ca-4d52-a9ca-47fb8ef7ec81\") " Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.157666 4933 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.157695 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c69wp\" (UniqueName: \"kubernetes.io/projected/5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346-kube-api-access-c69wp\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.157936 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61e85825-be78-4eba-9c52-3649968d0390-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "61e85825-be78-4eba-9c52-3649968d0390" (UID: "61e85825-be78-4eba-9c52-3649968d0390"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.158352 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49de438f-c2ca-4d52-a9ca-47fb8ef7ec81-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "49de438f-c2ca-4d52-a9ca-47fb8ef7ec81" (UID: "49de438f-c2ca-4d52-a9ca-47fb8ef7ec81"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.166739 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49de438f-c2ca-4d52-a9ca-47fb8ef7ec81-kube-api-access-xrnnm" (OuterVolumeSpecName: "kube-api-access-xrnnm") pod "49de438f-c2ca-4d52-a9ca-47fb8ef7ec81" (UID: "49de438f-c2ca-4d52-a9ca-47fb8ef7ec81"). InnerVolumeSpecName "kube-api-access-xrnnm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.167172 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61e85825-be78-4eba-9c52-3649968d0390-kube-api-access-nzr4c" (OuterVolumeSpecName: "kube-api-access-nzr4c") pod "61e85825-be78-4eba-9c52-3649968d0390" (UID: "61e85825-be78-4eba-9c52-3649968d0390"). InnerVolumeSpecName "kube-api-access-nzr4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.259960 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzr4c\" (UniqueName: \"kubernetes.io/projected/61e85825-be78-4eba-9c52-3649968d0390-kube-api-access-nzr4c\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.260024 4933 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61e85825-be78-4eba-9c52-3649968d0390-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.260040 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrnnm\" (UniqueName: \"kubernetes.io/projected/49de438f-c2ca-4d52-a9ca-47fb8ef7ec81-kube-api-access-xrnnm\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.260051 4933 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49de438f-c2ca-4d52-a9ca-47fb8ef7ec81-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.364522 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5xz5w" event={"ID":"61e85825-be78-4eba-9c52-3649968d0390","Type":"ContainerDied","Data":"5ddac43a87887bcaf4082fd47e5fc3d1863dbe8ba2cc04814acb99218e7313a3"} Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.364567 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5xz5w" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.364592 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ddac43a87887bcaf4082fd47e5fc3d1863dbe8ba2cc04814acb99218e7313a3" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.366862 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6cf5-account-create-update-shwg2" event={"ID":"49de438f-c2ca-4d52-a9ca-47fb8ef7ec81","Type":"ContainerDied","Data":"dfd3e9c8e159fc49d40c3e4c564ae71c4273367e125655c6824a38fbc09d4008"} Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.366895 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfd3e9c8e159fc49d40c3e4c564ae71c4273367e125655c6824a38fbc09d4008" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.366913 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6cf5-account-create-update-shwg2" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.372499 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-fd457" event={"ID":"51b55cac-ddde-4a68-a081-d3e34e4b39fc","Type":"ContainerStarted","Data":"ca0659e1c20dee73606fea6fffd360466f2cb282ace77ad019ba12c8211b640e"} Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.373858 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-fd457" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.377829 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9dc35964-1186-483a-8904-c98af6497c53","Type":"ContainerStarted","Data":"62dcbb86c5faff559b961946b2325e58847510015f0e24851a8d9be8c37a676a"} Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.377880 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9dc35964-1186-483a-8904-c98af6497c53","Type":"ContainerStarted","Data":"ba2407f71786322f1b0929f1f177ae095c9b64742a9424684b8adac9e5cd6c2b"} Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.378405 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.386912 4933 generic.go:334] "Generic (PLEG): container finished" podID="4696ea4a-1bc2-4e1b-9209-67beddf255e8" containerID="3bd15b55c6aadbc003639be05b6b40ba850298791fb4361fc13e3e372b586ad9" exitCode=0 Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.387000 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7blxn" event={"ID":"4696ea4a-1bc2-4e1b-9209-67beddf255e8","Type":"ContainerDied","Data":"3bd15b55c6aadbc003639be05b6b40ba850298791fb4361fc13e3e372b586ad9"} Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.389728 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0fd3-account-create-update-cjld8" event={"ID":"5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346","Type":"ContainerDied","Data":"3e8fa44702baf6436bd2665f68d1edc10fcd0ea5347a8a13e7a3735b8ea3e7d3"} Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.389784 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e8fa44702baf6436bd2665f68d1edc10fcd0ea5347a8a13e7a3735b8ea3e7d3" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.389831 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-0fd3-account-create-update-cjld8" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.391554 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wpgrh" event={"ID":"fa13e197-3320-4314-86ee-a1b90292ab1d","Type":"ContainerStarted","Data":"c588903cb169f0b18c8df3802258305ddd08d41f28822d804cb1221e3150e4a5"} Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.416782 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-fd457" podStartSLOduration=6.41674796 podStartE2EDuration="6.41674796s" podCreationTimestamp="2025-12-01 09:50:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:50:15.399450237 +0000 UTC m=+1106.041173852" watchObservedRunningTime="2025-12-01 09:50:15.41674796 +0000 UTC m=+1106.058471575" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.683790 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a128d87d-63cd-4f75-8f47-5bb700c496f3" path="/var/lib/kubelet/pods/a128d87d-63cd-4f75-8f47-5bb700c496f3/volumes" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.770544 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-25mh8" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.796947 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=4.111353164 podStartE2EDuration="6.79692199s" podCreationTimestamp="2025-12-01 09:50:09 +0000 UTC" firstStartedPulling="2025-12-01 09:50:11.498653851 +0000 UTC m=+1102.140377466" lastFinishedPulling="2025-12-01 09:50:14.184222677 +0000 UTC m=+1104.825946292" observedRunningTime="2025-12-01 09:50:15.472212038 +0000 UTC m=+1106.113935663" watchObservedRunningTime="2025-12-01 09:50:15.79692199 +0000 UTC m=+1106.438645605" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.873724 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7ef6-account-create-update-g9stz" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.877426 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5p5bx\" (UniqueName: \"kubernetes.io/projected/031657e9-4699-435e-a8e2-0e2442a10dd0-kube-api-access-5p5bx\") pod \"031657e9-4699-435e-a8e2-0e2442a10dd0\" (UID: \"031657e9-4699-435e-a8e2-0e2442a10dd0\") " Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.877696 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/031657e9-4699-435e-a8e2-0e2442a10dd0-operator-scripts\") pod \"031657e9-4699-435e-a8e2-0e2442a10dd0\" (UID: \"031657e9-4699-435e-a8e2-0e2442a10dd0\") " Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.878770 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/031657e9-4699-435e-a8e2-0e2442a10dd0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "031657e9-4699-435e-a8e2-0e2442a10dd0" (UID: "031657e9-4699-435e-a8e2-0e2442a10dd0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.879808 4933 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/031657e9-4699-435e-a8e2-0e2442a10dd0-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.891151 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/031657e9-4699-435e-a8e2-0e2442a10dd0-kube-api-access-5p5bx" (OuterVolumeSpecName: "kube-api-access-5p5bx") pod "031657e9-4699-435e-a8e2-0e2442a10dd0" (UID: "031657e9-4699-435e-a8e2-0e2442a10dd0"). InnerVolumeSpecName "kube-api-access-5p5bx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.981085 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81eeda37-3cd9-4518-8561-9b414ec377e7-operator-scripts\") pod \"81eeda37-3cd9-4518-8561-9b414ec377e7\" (UID: \"81eeda37-3cd9-4518-8561-9b414ec377e7\") " Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.981567 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsvlz\" (UniqueName: \"kubernetes.io/projected/81eeda37-3cd9-4518-8561-9b414ec377e7-kube-api-access-fsvlz\") pod \"81eeda37-3cd9-4518-8561-9b414ec377e7\" (UID: \"81eeda37-3cd9-4518-8561-9b414ec377e7\") " Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.982024 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5p5bx\" (UniqueName: \"kubernetes.io/projected/031657e9-4699-435e-a8e2-0e2442a10dd0-kube-api-access-5p5bx\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.982602 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81eeda37-3cd9-4518-8561-9b414ec377e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81eeda37-3cd9-4518-8561-9b414ec377e7" (UID: "81eeda37-3cd9-4518-8561-9b414ec377e7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:50:15 crc kubenswrapper[4933]: I1201 09:50:15.985663 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81eeda37-3cd9-4518-8561-9b414ec377e7-kube-api-access-fsvlz" (OuterVolumeSpecName: "kube-api-access-fsvlz") pod "81eeda37-3cd9-4518-8561-9b414ec377e7" (UID: "81eeda37-3cd9-4518-8561-9b414ec377e7"). InnerVolumeSpecName "kube-api-access-fsvlz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:50:16 crc kubenswrapper[4933]: I1201 09:50:16.083979 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsvlz\" (UniqueName: \"kubernetes.io/projected/81eeda37-3cd9-4518-8561-9b414ec377e7-kube-api-access-fsvlz\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:16 crc kubenswrapper[4933]: I1201 09:50:16.084465 4933 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81eeda37-3cd9-4518-8561-9b414ec377e7-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:16 crc kubenswrapper[4933]: I1201 09:50:16.420190 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-25mh8" event={"ID":"031657e9-4699-435e-a8e2-0e2442a10dd0","Type":"ContainerDied","Data":"be6b236c28e37be2b9425a29f17a23caab8cdf9b1389af1673c144b6df96b082"} Dec 01 09:50:16 crc kubenswrapper[4933]: I1201 09:50:16.420266 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be6b236c28e37be2b9425a29f17a23caab8cdf9b1389af1673c144b6df96b082" Dec 01 09:50:16 crc kubenswrapper[4933]: I1201 09:50:16.420395 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-25mh8" Dec 01 09:50:16 crc kubenswrapper[4933]: I1201 09:50:16.429835 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7ef6-account-create-update-g9stz" event={"ID":"81eeda37-3cd9-4518-8561-9b414ec377e7","Type":"ContainerDied","Data":"7e4ae55537aa5cf4e42a794b663869c59fdf458fc230e3b2d4f4c0414b3dc75c"} Dec 01 09:50:16 crc kubenswrapper[4933]: I1201 09:50:16.429905 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e4ae55537aa5cf4e42a794b663869c59fdf458fc230e3b2d4f4c0414b3dc75c" Dec 01 09:50:16 crc kubenswrapper[4933]: I1201 09:50:16.429999 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7ef6-account-create-update-g9stz" Dec 01 09:50:16 crc kubenswrapper[4933]: I1201 09:50:16.443405 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7blxn" event={"ID":"4696ea4a-1bc2-4e1b-9209-67beddf255e8","Type":"ContainerStarted","Data":"381a2f73e98940958a70a494e091cbc33ed05279cb6bb15e5418fe0fc4ffcd89"} Dec 01 09:50:16 crc kubenswrapper[4933]: I1201 09:50:16.443527 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-7blxn" Dec 01 09:50:16 crc kubenswrapper[4933]: I1201 09:50:16.494189 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-7blxn" podStartSLOduration=5.494152935 podStartE2EDuration="5.494152935s" podCreationTimestamp="2025-12-01 09:50:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:50:16.48864406 +0000 UTC m=+1107.130367675" watchObservedRunningTime="2025-12-01 09:50:16.494152935 +0000 UTC m=+1107.135876550" Dec 01 09:50:16 crc kubenswrapper[4933]: I1201 09:50:16.720642 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/59f78861-3fff-42c4-9592-4eb047ea6a88-etc-swift\") pod \"swift-storage-0\" (UID: \"59f78861-3fff-42c4-9592-4eb047ea6a88\") " pod="openstack/swift-storage-0" Dec 01 09:50:16 crc kubenswrapper[4933]: E1201 09:50:16.721777 4933 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 01 09:50:16 crc kubenswrapper[4933]: E1201 09:50:16.721807 4933 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 01 09:50:16 crc kubenswrapper[4933]: E1201 09:50:16.721851 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59f78861-3fff-42c4-9592-4eb047ea6a88-etc-swift podName:59f78861-3fff-42c4-9592-4eb047ea6a88 nodeName:}" failed. No retries permitted until 2025-12-01 09:50:20.72183412 +0000 UTC m=+1111.363557735 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/59f78861-3fff-42c4-9592-4eb047ea6a88-etc-swift") pod "swift-storage-0" (UID: "59f78861-3fff-42c4-9592-4eb047ea6a88") : configmap "swift-ring-files" not found Dec 01 09:50:19 crc kubenswrapper[4933]: I1201 09:50:19.065492 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-fq449"] Dec 01 09:50:19 crc kubenswrapper[4933]: E1201 09:50:19.066752 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a128d87d-63cd-4f75-8f47-5bb700c496f3" containerName="init" Dec 01 09:50:19 crc kubenswrapper[4933]: I1201 09:50:19.066773 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="a128d87d-63cd-4f75-8f47-5bb700c496f3" containerName="init" Dec 01 09:50:19 crc kubenswrapper[4933]: E1201 09:50:19.066815 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61e85825-be78-4eba-9c52-3649968d0390" containerName="mariadb-database-create" Dec 01 09:50:19 crc kubenswrapper[4933]: I1201 09:50:19.066825 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="61e85825-be78-4eba-9c52-3649968d0390" containerName="mariadb-database-create" Dec 01 09:50:19 crc kubenswrapper[4933]: E1201 09:50:19.066843 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="031657e9-4699-435e-a8e2-0e2442a10dd0" containerName="mariadb-database-create" Dec 01 09:50:19 crc kubenswrapper[4933]: I1201 09:50:19.066851 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="031657e9-4699-435e-a8e2-0e2442a10dd0" containerName="mariadb-database-create" Dec 01 09:50:19 crc kubenswrapper[4933]: E1201 09:50:19.066899 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346" containerName="mariadb-account-create-update" Dec 01 09:50:19 crc kubenswrapper[4933]: I1201 09:50:19.066908 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346" containerName="mariadb-account-create-update" Dec 01 09:50:19 crc kubenswrapper[4933]: E1201 09:50:19.066926 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49de438f-c2ca-4d52-a9ca-47fb8ef7ec81" containerName="mariadb-account-create-update" Dec 01 09:50:19 crc kubenswrapper[4933]: I1201 09:50:19.066942 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="49de438f-c2ca-4d52-a9ca-47fb8ef7ec81" containerName="mariadb-account-create-update" Dec 01 09:50:19 crc kubenswrapper[4933]: E1201 09:50:19.066958 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81eeda37-3cd9-4518-8561-9b414ec377e7" containerName="mariadb-account-create-update" Dec 01 09:50:19 crc kubenswrapper[4933]: I1201 09:50:19.066967 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="81eeda37-3cd9-4518-8561-9b414ec377e7" containerName="mariadb-account-create-update" Dec 01 09:50:19 crc kubenswrapper[4933]: I1201 09:50:19.067180 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="61e85825-be78-4eba-9c52-3649968d0390" containerName="mariadb-database-create" Dec 01 09:50:19 crc kubenswrapper[4933]: I1201 09:50:19.067195 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="81eeda37-3cd9-4518-8561-9b414ec377e7" containerName="mariadb-account-create-update" Dec 01 09:50:19 crc kubenswrapper[4933]: I1201 09:50:19.067205 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="031657e9-4699-435e-a8e2-0e2442a10dd0" containerName="mariadb-database-create" Dec 01 09:50:19 crc 
kubenswrapper[4933]: I1201 09:50:19.067222 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="a128d87d-63cd-4f75-8f47-5bb700c496f3" containerName="init" Dec 01 09:50:19 crc kubenswrapper[4933]: I1201 09:50:19.067234 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346" containerName="mariadb-account-create-update" Dec 01 09:50:19 crc kubenswrapper[4933]: I1201 09:50:19.067249 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="49de438f-c2ca-4d52-a9ca-47fb8ef7ec81" containerName="mariadb-account-create-update" Dec 01 09:50:19 crc kubenswrapper[4933]: I1201 09:50:19.068044 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fq449" Dec 01 09:50:19 crc kubenswrapper[4933]: I1201 09:50:19.129542 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fq449"] Dec 01 09:50:19 crc kubenswrapper[4933]: I1201 09:50:19.193192 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f441a07b-df6a-4d0b-b8d5-fec3698ff0d4-operator-scripts\") pod \"keystone-db-create-fq449\" (UID: \"f441a07b-df6a-4d0b-b8d5-fec3698ff0d4\") " pod="openstack/keystone-db-create-fq449" Dec 01 09:50:19 crc kubenswrapper[4933]: I1201 09:50:19.194075 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db458\" (UniqueName: \"kubernetes.io/projected/f441a07b-df6a-4d0b-b8d5-fec3698ff0d4-kube-api-access-db458\") pod \"keystone-db-create-fq449\" (UID: \"f441a07b-df6a-4d0b-b8d5-fec3698ff0d4\") " pod="openstack/keystone-db-create-fq449" Dec 01 09:50:19 crc kubenswrapper[4933]: I1201 09:50:19.296237 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db458\" (UniqueName: \"kubernetes.io/projected/f441a07b-df6a-4d0b-b8d5-fec3698ff0d4-kube-api-access-db458\") pod \"keystone-db-create-fq449\" (UID: \"f441a07b-df6a-4d0b-b8d5-fec3698ff0d4\") " pod="openstack/keystone-db-create-fq449" Dec 01 09:50:19 crc kubenswrapper[4933]: I1201 09:50:19.296439 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f441a07b-df6a-4d0b-b8d5-fec3698ff0d4-operator-scripts\") pod \"keystone-db-create-fq449\" (UID: \"f441a07b-df6a-4d0b-b8d5-fec3698ff0d4\") " pod="openstack/keystone-db-create-fq449" Dec 01 09:50:19 crc kubenswrapper[4933]: I1201 09:50:19.297505 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f441a07b-df6a-4d0b-b8d5-fec3698ff0d4-operator-scripts\") pod \"keystone-db-create-fq449\" (UID: \"f441a07b-df6a-4d0b-b8d5-fec3698ff0d4\") " pod="openstack/keystone-db-create-fq449" Dec 01 09:50:19 crc kubenswrapper[4933]: I1201 09:50:19.318231 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db458\" (UniqueName: \"kubernetes.io/projected/f441a07b-df6a-4d0b-b8d5-fec3698ff0d4-kube-api-access-db458\") pod \"keystone-db-create-fq449\" (UID: \"f441a07b-df6a-4d0b-b8d5-fec3698ff0d4\") " pod="openstack/keystone-db-create-fq449" Dec 01 09:50:19 crc kubenswrapper[4933]: I1201 09:50:19.427052 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-fq449" Dec 01 09:50:20 crc kubenswrapper[4933]: I1201 09:50:20.163503 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-fd457" Dec 01 09:50:20 crc kubenswrapper[4933]: I1201 09:50:20.293263 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-kxbq6"] Dec 01 09:50:20 crc kubenswrapper[4933]: I1201 09:50:20.294559 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kxbq6" Dec 01 09:50:20 crc kubenswrapper[4933]: I1201 09:50:20.299297 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 01 09:50:20 crc kubenswrapper[4933]: I1201 09:50:20.299411 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-shxmv" Dec 01 09:50:20 crc kubenswrapper[4933]: I1201 09:50:20.309474 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kxbq6"] Dec 01 09:50:20 crc kubenswrapper[4933]: I1201 09:50:20.416468 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3c54760e-eb3b-4ad1-a6ee-3c494878c668-db-sync-config-data\") pod \"glance-db-sync-kxbq6\" (UID: \"3c54760e-eb3b-4ad1-a6ee-3c494878c668\") " pod="openstack/glance-db-sync-kxbq6" Dec 01 09:50:20 crc kubenswrapper[4933]: I1201 09:50:20.416611 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c54760e-eb3b-4ad1-a6ee-3c494878c668-combined-ca-bundle\") pod \"glance-db-sync-kxbq6\" (UID: \"3c54760e-eb3b-4ad1-a6ee-3c494878c668\") " pod="openstack/glance-db-sync-kxbq6" Dec 01 09:50:20 crc kubenswrapper[4933]: I1201 09:50:20.416645 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c54760e-eb3b-4ad1-a6ee-3c494878c668-config-data\") pod \"glance-db-sync-kxbq6\" (UID: \"3c54760e-eb3b-4ad1-a6ee-3c494878c668\") " pod="openstack/glance-db-sync-kxbq6" Dec 01 09:50:20 crc kubenswrapper[4933]: I1201 09:50:20.416692 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv8p2\" (UniqueName: \"kubernetes.io/projected/3c54760e-eb3b-4ad1-a6ee-3c494878c668-kube-api-access-xv8p2\") pod \"glance-db-sync-kxbq6\" (UID: \"3c54760e-eb3b-4ad1-a6ee-3c494878c668\") " pod="openstack/glance-db-sync-kxbq6" Dec 01 09:50:20 crc kubenswrapper[4933]: I1201 09:50:20.518536 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv8p2\" (UniqueName: \"kubernetes.io/projected/3c54760e-eb3b-4ad1-a6ee-3c494878c668-kube-api-access-xv8p2\") pod \"glance-db-sync-kxbq6\" (UID: \"3c54760e-eb3b-4ad1-a6ee-3c494878c668\") " pod="openstack/glance-db-sync-kxbq6" Dec 01 09:50:20 crc kubenswrapper[4933]: I1201 09:50:20.518652 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3c54760e-eb3b-4ad1-a6ee-3c494878c668-db-sync-config-data\") pod \"glance-db-sync-kxbq6\" (UID: \"3c54760e-eb3b-4ad1-a6ee-3c494878c668\") " pod="openstack/glance-db-sync-kxbq6" Dec 01 09:50:20 crc kubenswrapper[4933]: I1201 09:50:20.518772 4933 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c54760e-eb3b-4ad1-a6ee-3c494878c668-combined-ca-bundle\") pod \"glance-db-sync-kxbq6\" (UID: \"3c54760e-eb3b-4ad1-a6ee-3c494878c668\") " pod="openstack/glance-db-sync-kxbq6" Dec 01 09:50:20 crc kubenswrapper[4933]: I1201 09:50:20.518806 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c54760e-eb3b-4ad1-a6ee-3c494878c668-config-data\") pod \"glance-db-sync-kxbq6\" (UID: \"3c54760e-eb3b-4ad1-a6ee-3c494878c668\") " pod="openstack/glance-db-sync-kxbq6" Dec 01 09:50:20 crc kubenswrapper[4933]: I1201 09:50:20.527985 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3c54760e-eb3b-4ad1-a6ee-3c494878c668-db-sync-config-data\") pod \"glance-db-sync-kxbq6\" (UID: \"3c54760e-eb3b-4ad1-a6ee-3c494878c668\") " pod="openstack/glance-db-sync-kxbq6" Dec 01 09:50:20 crc kubenswrapper[4933]: I1201 09:50:20.528195 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c54760e-eb3b-4ad1-a6ee-3c494878c668-combined-ca-bundle\") pod \"glance-db-sync-kxbq6\" (UID: \"3c54760e-eb3b-4ad1-a6ee-3c494878c668\") " pod="openstack/glance-db-sync-kxbq6" Dec 01 09:50:20 crc kubenswrapper[4933]: I1201 09:50:20.529504 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c54760e-eb3b-4ad1-a6ee-3c494878c668-config-data\") pod \"glance-db-sync-kxbq6\" (UID: \"3c54760e-eb3b-4ad1-a6ee-3c494878c668\") " pod="openstack/glance-db-sync-kxbq6" Dec 01 09:50:20 crc kubenswrapper[4933]: I1201 09:50:20.539771 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv8p2\" (UniqueName: \"kubernetes.io/projected/3c54760e-eb3b-4ad1-a6ee-3c494878c668-kube-api-access-xv8p2\") pod \"glance-db-sync-kxbq6\" (UID: \"3c54760e-eb3b-4ad1-a6ee-3c494878c668\") " pod="openstack/glance-db-sync-kxbq6" Dec 01 09:50:20 crc kubenswrapper[4933]: I1201 09:50:20.627063 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kxbq6" Dec 01 09:50:20 crc kubenswrapper[4933]: I1201 09:50:20.722901 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/59f78861-3fff-42c4-9592-4eb047ea6a88-etc-swift\") pod \"swift-storage-0\" (UID: \"59f78861-3fff-42c4-9592-4eb047ea6a88\") " pod="openstack/swift-storage-0" Dec 01 09:50:20 crc kubenswrapper[4933]: E1201 09:50:20.723200 4933 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 01 09:50:20 crc kubenswrapper[4933]: E1201 09:50:20.723218 4933 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 01 09:50:20 crc kubenswrapper[4933]: E1201 09:50:20.723261 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59f78861-3fff-42c4-9592-4eb047ea6a88-etc-swift podName:59f78861-3fff-42c4-9592-4eb047ea6a88 nodeName:}" failed. No retries permitted until 2025-12-01 09:50:28.723247659 +0000 UTC m=+1119.364971274 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/59f78861-3fff-42c4-9592-4eb047ea6a88-etc-swift") pod "swift-storage-0" (UID: "59f78861-3fff-42c4-9592-4eb047ea6a88") : configmap "swift-ring-files" not found Dec 01 09:50:21 crc kubenswrapper[4933]: I1201 09:50:21.490562 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wpgrh" event={"ID":"fa13e197-3320-4314-86ee-a1b90292ab1d","Type":"ContainerStarted","Data":"648aa6bc24969ed3afd745d565cd3f58bb019b5bcce9e8ce2103d766191dd901"} Dec 01 09:50:21 crc kubenswrapper[4933]: I1201 09:50:21.778124 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fq449"] Dec 01 09:50:21 crc kubenswrapper[4933]: I1201 09:50:21.802106 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kxbq6"] Dec 01 09:50:22 crc kubenswrapper[4933]: I1201 09:50:22.210619 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-7blxn" Dec 01 09:50:22 crc kubenswrapper[4933]: I1201 09:50:22.342140 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fd457"] Dec 01 09:50:22 crc kubenswrapper[4933]: I1201 09:50:22.342572 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-fd457" podUID="51b55cac-ddde-4a68-a081-d3e34e4b39fc" containerName="dnsmasq-dns" containerID="cri-o://ca0659e1c20dee73606fea6fffd360466f2cb282ace77ad019ba12c8211b640e" gracePeriod=10 Dec 01 09:50:22 crc kubenswrapper[4933]: I1201 09:50:22.501921 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fq449" event={"ID":"f441a07b-df6a-4d0b-b8d5-fec3698ff0d4","Type":"ContainerStarted","Data":"b10e49a9dab4d3132b905595d8b2401fc946229002d603995bd217b753e7a377"} Dec 01 09:50:22 crc kubenswrapper[4933]: I1201 09:50:22.504278 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kxbq6" event={"ID":"3c54760e-eb3b-4ad1-a6ee-3c494878c668","Type":"ContainerStarted","Data":"acca3752b84db03daa2f90419ba0ed8c80ff80663af2cbc657af9a70566ba3b0"} Dec 01 09:50:22 crc kubenswrapper[4933]: I1201 09:50:22.527117 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-wpgrh" podStartSLOduration=3.190735754 podStartE2EDuration="9.527096044s" podCreationTimestamp="2025-12-01 09:50:13 +0000 UTC" firstStartedPulling="2025-12-01 09:50:14.76258291 +0000 UTC m=+1105.404306525" lastFinishedPulling="2025-12-01 09:50:21.0989432 +0000 UTC m=+1111.740666815" observedRunningTime="2025-12-01 09:50:22.521498067 +0000 UTC m=+1113.163221692" watchObservedRunningTime="2025-12-01 09:50:22.527096044 +0000 UTC m=+1113.168819659" Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.354031 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-fd457" Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.529235 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51b55cac-ddde-4a68-a081-d3e34e4b39fc-ovsdbserver-nb\") pod \"51b55cac-ddde-4a68-a081-d3e34e4b39fc\" (UID: \"51b55cac-ddde-4a68-a081-d3e34e4b39fc\") " Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.529295 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b55cac-ddde-4a68-a081-d3e34e4b39fc-config\") pod \"51b55cac-ddde-4a68-a081-d3e34e4b39fc\" (UID: \"51b55cac-ddde-4a68-a081-d3e34e4b39fc\") " Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.529391 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51b55cac-ddde-4a68-a081-d3e34e4b39fc-ovsdbserver-sb\") pod \"51b55cac-ddde-4a68-a081-d3e34e4b39fc\" (UID: \"51b55cac-ddde-4a68-a081-d3e34e4b39fc\") " Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.529441 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51b55cac-ddde-4a68-a081-d3e34e4b39fc-dns-svc\") pod \"51b55cac-ddde-4a68-a081-d3e34e4b39fc\" (UID: \"51b55cac-ddde-4a68-a081-d3e34e4b39fc\") " Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.529561 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrl4p\" (UniqueName: \"kubernetes.io/projected/51b55cac-ddde-4a68-a081-d3e34e4b39fc-kube-api-access-hrl4p\") pod \"51b55cac-ddde-4a68-a081-d3e34e4b39fc\" (UID: \"51b55cac-ddde-4a68-a081-d3e34e4b39fc\") " Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.559729 4933 generic.go:334] "Generic (PLEG): container finished" podID="f441a07b-df6a-4d0b-b8d5-fec3698ff0d4" containerID="343287ae22c6c8430ba36bddf7cad2457d7b37c6427e70f681f59daf4efb35d3" exitCode=0 Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.559830 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fq449" event={"ID":"f441a07b-df6a-4d0b-b8d5-fec3698ff0d4","Type":"ContainerDied","Data":"343287ae22c6c8430ba36bddf7cad2457d7b37c6427e70f681f59daf4efb35d3"} Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.575790 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51b55cac-ddde-4a68-a081-d3e34e4b39fc-kube-api-access-hrl4p" (OuterVolumeSpecName: "kube-api-access-hrl4p") pod "51b55cac-ddde-4a68-a081-d3e34e4b39fc" (UID: "51b55cac-ddde-4a68-a081-d3e34e4b39fc"). InnerVolumeSpecName "kube-api-access-hrl4p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.579587 4933 generic.go:334] "Generic (PLEG): container finished" podID="51b55cac-ddde-4a68-a081-d3e34e4b39fc" containerID="ca0659e1c20dee73606fea6fffd360466f2cb282ace77ad019ba12c8211b640e" exitCode=0 Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.579629 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-fd457" event={"ID":"51b55cac-ddde-4a68-a081-d3e34e4b39fc","Type":"ContainerDied","Data":"ca0659e1c20dee73606fea6fffd360466f2cb282ace77ad019ba12c8211b640e"} Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.579657 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-fd457" event={"ID":"51b55cac-ddde-4a68-a081-d3e34e4b39fc","Type":"ContainerDied","Data":"7f192fcd568d0d3c29142e265a6e82ef449edddf97ca15af750d77a0ffcb9c2d"} Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.579675 4933 scope.go:117] "RemoveContainer" containerID="ca0659e1c20dee73606fea6fffd360466f2cb282ace77ad019ba12c8211b640e" Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.579830 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-fd457" Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.651035 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrl4p\" (UniqueName: \"kubernetes.io/projected/51b55cac-ddde-4a68-a081-d3e34e4b39fc-kube-api-access-hrl4p\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.691973 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b55cac-ddde-4a68-a081-d3e34e4b39fc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "51b55cac-ddde-4a68-a081-d3e34e4b39fc" (UID: "51b55cac-ddde-4a68-a081-d3e34e4b39fc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.696876 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b55cac-ddde-4a68-a081-d3e34e4b39fc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "51b55cac-ddde-4a68-a081-d3e34e4b39fc" (UID: "51b55cac-ddde-4a68-a081-d3e34e4b39fc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.699089 4933 scope.go:117] "RemoveContainer" containerID="138bf4daa34dc6391fc705e4b32af6cb5f0099ba2d148007ac7cc4d4be0ae3cf" Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.704032 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b55cac-ddde-4a68-a081-d3e34e4b39fc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "51b55cac-ddde-4a68-a081-d3e34e4b39fc" (UID: "51b55cac-ddde-4a68-a081-d3e34e4b39fc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.717847 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b55cac-ddde-4a68-a081-d3e34e4b39fc-config" (OuterVolumeSpecName: "config") pod "51b55cac-ddde-4a68-a081-d3e34e4b39fc" (UID: "51b55cac-ddde-4a68-a081-d3e34e4b39fc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.729670 4933 scope.go:117] "RemoveContainer" containerID="ca0659e1c20dee73606fea6fffd360466f2cb282ace77ad019ba12c8211b640e" Dec 01 09:50:23 crc kubenswrapper[4933]: E1201 09:50:23.730231 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca0659e1c20dee73606fea6fffd360466f2cb282ace77ad019ba12c8211b640e\": container with ID starting with ca0659e1c20dee73606fea6fffd360466f2cb282ace77ad019ba12c8211b640e not found: ID does not exist" containerID="ca0659e1c20dee73606fea6fffd360466f2cb282ace77ad019ba12c8211b640e" Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.730277 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca0659e1c20dee73606fea6fffd360466f2cb282ace77ad019ba12c8211b640e"} err="failed to get container status \"ca0659e1c20dee73606fea6fffd360466f2cb282ace77ad019ba12c8211b640e\": rpc error: code = NotFound desc = could not find container \"ca0659e1c20dee73606fea6fffd360466f2cb282ace77ad019ba12c8211b640e\": container with ID starting with ca0659e1c20dee73606fea6fffd360466f2cb282ace77ad019ba12c8211b640e not found: ID does not exist" Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.730425 4933 scope.go:117] "RemoveContainer" containerID="138bf4daa34dc6391fc705e4b32af6cb5f0099ba2d148007ac7cc4d4be0ae3cf" Dec 01 09:50:23 crc kubenswrapper[4933]: E1201 09:50:23.731568 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"138bf4daa34dc6391fc705e4b32af6cb5f0099ba2d148007ac7cc4d4be0ae3cf\": container with ID starting with 138bf4daa34dc6391fc705e4b32af6cb5f0099ba2d148007ac7cc4d4be0ae3cf not found: ID does not exist" containerID="138bf4daa34dc6391fc705e4b32af6cb5f0099ba2d148007ac7cc4d4be0ae3cf" Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.731635 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"138bf4daa34dc6391fc705e4b32af6cb5f0099ba2d148007ac7cc4d4be0ae3cf"} err="failed to get container status \"138bf4daa34dc6391fc705e4b32af6cb5f0099ba2d148007ac7cc4d4be0ae3cf\": rpc error: code = NotFound desc = could not find container \"138bf4daa34dc6391fc705e4b32af6cb5f0099ba2d148007ac7cc4d4be0ae3cf\": container with ID starting with 138bf4daa34dc6391fc705e4b32af6cb5f0099ba2d148007ac7cc4d4be0ae3cf not found: ID does not exist" Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.753258 4933 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51b55cac-ddde-4a68-a081-d3e34e4b39fc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.753322 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b55cac-ddde-4a68-a081-d3e34e4b39fc-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.753335 4933 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51b55cac-ddde-4a68-a081-d3e34e4b39fc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.753367 4933 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51b55cac-ddde-4a68-a081-d3e34e4b39fc-dns-svc\") on node \"crc\" 
DevicePath \"\"" Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.917688 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fd457"] Dec 01 09:50:23 crc kubenswrapper[4933]: I1201 09:50:23.926233 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fd457"] Dec 01 09:50:25 crc kubenswrapper[4933]: I1201 09:50:25.165218 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fq449" Dec 01 09:50:25 crc kubenswrapper[4933]: I1201 09:50:25.283971 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db458\" (UniqueName: \"kubernetes.io/projected/f441a07b-df6a-4d0b-b8d5-fec3698ff0d4-kube-api-access-db458\") pod \"f441a07b-df6a-4d0b-b8d5-fec3698ff0d4\" (UID: \"f441a07b-df6a-4d0b-b8d5-fec3698ff0d4\") " Dec 01 09:50:25 crc kubenswrapper[4933]: I1201 09:50:25.284030 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f441a07b-df6a-4d0b-b8d5-fec3698ff0d4-operator-scripts\") pod \"f441a07b-df6a-4d0b-b8d5-fec3698ff0d4\" (UID: \"f441a07b-df6a-4d0b-b8d5-fec3698ff0d4\") " Dec 01 09:50:25 crc kubenswrapper[4933]: I1201 09:50:25.284909 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f441a07b-df6a-4d0b-b8d5-fec3698ff0d4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f441a07b-df6a-4d0b-b8d5-fec3698ff0d4" (UID: "f441a07b-df6a-4d0b-b8d5-fec3698ff0d4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:50:25 crc kubenswrapper[4933]: I1201 09:50:25.289455 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f441a07b-df6a-4d0b-b8d5-fec3698ff0d4-kube-api-access-db458" (OuterVolumeSpecName: "kube-api-access-db458") pod "f441a07b-df6a-4d0b-b8d5-fec3698ff0d4" (UID: "f441a07b-df6a-4d0b-b8d5-fec3698ff0d4"). InnerVolumeSpecName "kube-api-access-db458". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:50:25 crc kubenswrapper[4933]: I1201 09:50:25.295572 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 01 09:50:25 crc kubenswrapper[4933]: I1201 09:50:25.387635 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db458\" (UniqueName: \"kubernetes.io/projected/f441a07b-df6a-4d0b-b8d5-fec3698ff0d4-kube-api-access-db458\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:25 crc kubenswrapper[4933]: I1201 09:50:25.387681 4933 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f441a07b-df6a-4d0b-b8d5-fec3698ff0d4-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:25 crc kubenswrapper[4933]: I1201 09:50:25.605436 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fq449" event={"ID":"f441a07b-df6a-4d0b-b8d5-fec3698ff0d4","Type":"ContainerDied","Data":"b10e49a9dab4d3132b905595d8b2401fc946229002d603995bd217b753e7a377"} Dec 01 09:50:25 crc kubenswrapper[4933]: I1201 09:50:25.605484 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b10e49a9dab4d3132b905595d8b2401fc946229002d603995bd217b753e7a377" Dec 01 09:50:25 crc kubenswrapper[4933]: I1201 09:50:25.605535 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-fq449" Dec 01 09:50:25 crc kubenswrapper[4933]: I1201 09:50:25.686773 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51b55cac-ddde-4a68-a081-d3e34e4b39fc" path="/var/lib/kubelet/pods/51b55cac-ddde-4a68-a081-d3e34e4b39fc/volumes" Dec 01 09:50:28 crc kubenswrapper[4933]: I1201 09:50:28.771235 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/59f78861-3fff-42c4-9592-4eb047ea6a88-etc-swift\") pod \"swift-storage-0\" (UID: \"59f78861-3fff-42c4-9592-4eb047ea6a88\") " pod="openstack/swift-storage-0" Dec 01 09:50:28 crc kubenswrapper[4933]: E1201 09:50:28.771528 4933 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 01 09:50:28 crc kubenswrapper[4933]: E1201 09:50:28.771782 4933 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 01 09:50:28 crc kubenswrapper[4933]: E1201 09:50:28.771860 4933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59f78861-3fff-42c4-9592-4eb047ea6a88-etc-swift podName:59f78861-3fff-42c4-9592-4eb047ea6a88 nodeName:}" failed. No retries permitted until 2025-12-01 09:50:44.77183703 +0000 UTC m=+1135.413560665 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/59f78861-3fff-42c4-9592-4eb047ea6a88-etc-swift") pod "swift-storage-0" (UID: "59f78861-3fff-42c4-9592-4eb047ea6a88") : configmap "swift-ring-files" not found Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.557333 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5tgrr" podUID="a1f2d08e-94f8-47ec-9e7e-a4722b71b609" containerName="ovn-controller" probeResult="failure" output=< Dec 01 09:50:34 crc kubenswrapper[4933]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 01 09:50:34 crc kubenswrapper[4933]: > Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.572731 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-l8bgh" Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.582366 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-l8bgh" Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.807574 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5tgrr-config-ssbl9"] Dec 01 09:50:34 crc kubenswrapper[4933]: E1201 09:50:34.807986 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51b55cac-ddde-4a68-a081-d3e34e4b39fc" containerName="dnsmasq-dns" Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.808003 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="51b55cac-ddde-4a68-a081-d3e34e4b39fc" containerName="dnsmasq-dns" Dec 01 09:50:34 crc kubenswrapper[4933]: E1201 09:50:34.808024 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f441a07b-df6a-4d0b-b8d5-fec3698ff0d4" containerName="mariadb-database-create" Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.808030 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="f441a07b-df6a-4d0b-b8d5-fec3698ff0d4" containerName="mariadb-database-create" Dec 01 09:50:34 crc kubenswrapper[4933]: E1201 09:50:34.808045 4933 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="51b55cac-ddde-4a68-a081-d3e34e4b39fc" containerName="init" Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.808053 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="51b55cac-ddde-4a68-a081-d3e34e4b39fc" containerName="init" Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.808240 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="51b55cac-ddde-4a68-a081-d3e34e4b39fc" containerName="dnsmasq-dns" Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.808264 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="f441a07b-df6a-4d0b-b8d5-fec3698ff0d4" containerName="mariadb-database-create" Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.808942 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5tgrr-config-ssbl9" Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.813634 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.832245 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5tgrr-config-ssbl9"] Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.891979 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/42616903-0238-4f13-b0cd-b87324be7f2d-var-log-ovn\") pod \"ovn-controller-5tgrr-config-ssbl9\" (UID: \"42616903-0238-4f13-b0cd-b87324be7f2d\") " pod="openstack/ovn-controller-5tgrr-config-ssbl9" Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.892131 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/42616903-0238-4f13-b0cd-b87324be7f2d-additional-scripts\") pod \"ovn-controller-5tgrr-config-ssbl9\" (UID: \"42616903-0238-4f13-b0cd-b87324be7f2d\") " pod="openstack/ovn-controller-5tgrr-config-ssbl9" Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.892157 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42616903-0238-4f13-b0cd-b87324be7f2d-scripts\") pod \"ovn-controller-5tgrr-config-ssbl9\" (UID: \"42616903-0238-4f13-b0cd-b87324be7f2d\") " pod="openstack/ovn-controller-5tgrr-config-ssbl9" Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.892197 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw2qq\" (UniqueName: \"kubernetes.io/projected/42616903-0238-4f13-b0cd-b87324be7f2d-kube-api-access-lw2qq\") pod \"ovn-controller-5tgrr-config-ssbl9\" (UID: \"42616903-0238-4f13-b0cd-b87324be7f2d\") " pod="openstack/ovn-controller-5tgrr-config-ssbl9" Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.892225 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/42616903-0238-4f13-b0cd-b87324be7f2d-var-run-ovn\") pod \"ovn-controller-5tgrr-config-ssbl9\" (UID: \"42616903-0238-4f13-b0cd-b87324be7f2d\") " pod="openstack/ovn-controller-5tgrr-config-ssbl9" Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.892291 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/42616903-0238-4f13-b0cd-b87324be7f2d-var-run\") pod \"ovn-controller-5tgrr-config-ssbl9\" (UID: \"42616903-0238-4f13-b0cd-b87324be7f2d\") " pod="openstack/ovn-controller-5tgrr-config-ssbl9" Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.997413 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/42616903-0238-4f13-b0cd-b87324be7f2d-var-run\") pod \"ovn-controller-5tgrr-config-ssbl9\" (UID: \"42616903-0238-4f13-b0cd-b87324be7f2d\") " pod="openstack/ovn-controller-5tgrr-config-ssbl9" Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.997515 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/42616903-0238-4f13-b0cd-b87324be7f2d-var-log-ovn\") pod \"ovn-controller-5tgrr-config-ssbl9\" (UID: \"42616903-0238-4f13-b0cd-b87324be7f2d\") " pod="openstack/ovn-controller-5tgrr-config-ssbl9" Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.997590 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/42616903-0238-4f13-b0cd-b87324be7f2d-additional-scripts\") pod \"ovn-controller-5tgrr-config-ssbl9\" (UID: \"42616903-0238-4f13-b0cd-b87324be7f2d\") " pod="openstack/ovn-controller-5tgrr-config-ssbl9" Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.997609 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42616903-0238-4f13-b0cd-b87324be7f2d-scripts\") pod \"ovn-controller-5tgrr-config-ssbl9\" (UID: \"42616903-0238-4f13-b0cd-b87324be7f2d\") " pod="openstack/ovn-controller-5tgrr-config-ssbl9" Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.997637 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw2qq\" (UniqueName: \"kubernetes.io/projected/42616903-0238-4f13-b0cd-b87324be7f2d-kube-api-access-lw2qq\") pod \"ovn-controller-5tgrr-config-ssbl9\" (UID: \"42616903-0238-4f13-b0cd-b87324be7f2d\") " pod="openstack/ovn-controller-5tgrr-config-ssbl9" Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.997663 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/42616903-0238-4f13-b0cd-b87324be7f2d-var-run-ovn\") pod \"ovn-controller-5tgrr-config-ssbl9\" (UID: \"42616903-0238-4f13-b0cd-b87324be7f2d\") " pod="openstack/ovn-controller-5tgrr-config-ssbl9" Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.997893 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/42616903-0238-4f13-b0cd-b87324be7f2d-var-log-ovn\") pod \"ovn-controller-5tgrr-config-ssbl9\" (UID: \"42616903-0238-4f13-b0cd-b87324be7f2d\") " pod="openstack/ovn-controller-5tgrr-config-ssbl9" Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.997902 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/42616903-0238-4f13-b0cd-b87324be7f2d-var-run-ovn\") pod \"ovn-controller-5tgrr-config-ssbl9\" (UID: \"42616903-0238-4f13-b0cd-b87324be7f2d\") " pod="openstack/ovn-controller-5tgrr-config-ssbl9" Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.997988 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/42616903-0238-4f13-b0cd-b87324be7f2d-var-run\") pod \"ovn-controller-5tgrr-config-ssbl9\" (UID: \"42616903-0238-4f13-b0cd-b87324be7f2d\") " pod="openstack/ovn-controller-5tgrr-config-ssbl9" Dec 01 09:50:34 crc kubenswrapper[4933]: I1201 09:50:34.999205 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/42616903-0238-4f13-b0cd-b87324be7f2d-additional-scripts\") pod \"ovn-controller-5tgrr-config-ssbl9\" (UID: \"42616903-0238-4f13-b0cd-b87324be7f2d\") " pod="openstack/ovn-controller-5tgrr-config-ssbl9" Dec 01 09:50:35 crc kubenswrapper[4933]: I1201 09:50:35.001624 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42616903-0238-4f13-b0cd-b87324be7f2d-scripts\") pod \"ovn-controller-5tgrr-config-ssbl9\" (UID: \"42616903-0238-4f13-b0cd-b87324be7f2d\") " pod="openstack/ovn-controller-5tgrr-config-ssbl9" Dec 01 09:50:35 crc kubenswrapper[4933]: I1201 09:50:35.021793 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw2qq\" (UniqueName: \"kubernetes.io/projected/42616903-0238-4f13-b0cd-b87324be7f2d-kube-api-access-lw2qq\") pod \"ovn-controller-5tgrr-config-ssbl9\" (UID: \"42616903-0238-4f13-b0cd-b87324be7f2d\") " pod="openstack/ovn-controller-5tgrr-config-ssbl9" Dec 01 09:50:35 crc kubenswrapper[4933]: I1201 09:50:35.146532 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5tgrr-config-ssbl9" Dec 01 09:50:35 crc kubenswrapper[4933]: I1201 09:50:35.702333 4933 generic.go:334] "Generic (PLEG): container finished" podID="3d9a36ba-b2c3-4f85-96d6-608d8e9749ec" containerID="50e61c5cd567cfe70fd9d90579b11db9d8c588d75c47667676368152554b647e" exitCode=0 Dec 01 09:50:35 crc kubenswrapper[4933]: I1201 09:50:35.702676 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec","Type":"ContainerDied","Data":"50e61c5cd567cfe70fd9d90579b11db9d8c588d75c47667676368152554b647e"} Dec 01 09:50:38 crc kubenswrapper[4933]: I1201 09:50:38.808162 4933 generic.go:334] "Generic (PLEG): container finished" podID="fa13e197-3320-4314-86ee-a1b90292ab1d" containerID="648aa6bc24969ed3afd745d565cd3f58bb019b5bcce9e8ce2103d766191dd901" exitCode=0 Dec 01 09:50:38 crc kubenswrapper[4933]: I1201 09:50:38.808337 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wpgrh" event={"ID":"fa13e197-3320-4314-86ee-a1b90292ab1d","Type":"ContainerDied","Data":"648aa6bc24969ed3afd745d565cd3f58bb019b5bcce9e8ce2103d766191dd901"} Dec 01 09:50:39 crc kubenswrapper[4933]: I1201 09:50:39.640423 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5tgrr" podUID="a1f2d08e-94f8-47ec-9e7e-a4722b71b609" containerName="ovn-controller" probeResult="failure" output=< Dec 01 09:50:39 crc kubenswrapper[4933]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 01 09:50:39 crc kubenswrapper[4933]: > Dec 01 09:50:39 crc kubenswrapper[4933]: I1201 09:50:39.819944 4933 generic.go:334] "Generic (PLEG): container finished" podID="b8f90456-f375-447c-8f32-8ca629a28861" containerID="649eb745891b3ba68ed59fafd553564f944a61857c2db3028ded94f18160e91a" exitCode=0 Dec 01 09:50:39 crc kubenswrapper[4933]: I1201 09:50:39.820027 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b8f90456-f375-447c-8f32-8ca629a28861","Type":"ContainerDied","Data":"649eb745891b3ba68ed59fafd553564f944a61857c2db3028ded94f18160e91a"} Dec 01 09:50:44 crc kubenswrapper[4933]: E1201 09:50:44.386548 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Dec 01 09:50:44 crc kubenswrapper[4933]: E1201 09:50:44.387539 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xv8p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-kxbq6_openstack(3c54760e-eb3b-4ad1-a6ee-3c494878c668): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:50:44 crc kubenswrapper[4933]: E1201 09:50:44.388884 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-kxbq6" podUID="3c54760e-eb3b-4ad1-a6ee-3c494878c668" Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.511800 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-wpgrh" Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.604881 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5tgrr" podUID="a1f2d08e-94f8-47ec-9e7e-a4722b71b609" containerName="ovn-controller" probeResult="failure" output=< Dec 01 09:50:44 crc kubenswrapper[4933]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 01 09:50:44 crc kubenswrapper[4933]: > Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.616196 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa13e197-3320-4314-86ee-a1b90292ab1d-scripts\") pod \"fa13e197-3320-4314-86ee-a1b90292ab1d\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.616264 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fa13e197-3320-4314-86ee-a1b90292ab1d-ring-data-devices\") pod \"fa13e197-3320-4314-86ee-a1b90292ab1d\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.616411 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b8db\" (UniqueName: \"kubernetes.io/projected/fa13e197-3320-4314-86ee-a1b90292ab1d-kube-api-access-9b8db\") pod \"fa13e197-3320-4314-86ee-a1b90292ab1d\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.616537 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fa13e197-3320-4314-86ee-a1b90292ab1d-swiftconf\") pod \"fa13e197-3320-4314-86ee-a1b90292ab1d\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.616578 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fa13e197-3320-4314-86ee-a1b90292ab1d-dispersionconf\") pod \"fa13e197-3320-4314-86ee-a1b90292ab1d\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.617782 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa13e197-3320-4314-86ee-a1b90292ab1d-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "fa13e197-3320-4314-86ee-a1b90292ab1d" (UID: "fa13e197-3320-4314-86ee-a1b90292ab1d"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.618492 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa13e197-3320-4314-86ee-a1b90292ab1d-combined-ca-bundle\") pod \"fa13e197-3320-4314-86ee-a1b90292ab1d\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.618535 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fa13e197-3320-4314-86ee-a1b90292ab1d-etc-swift\") pod \"fa13e197-3320-4314-86ee-a1b90292ab1d\" (UID: \"fa13e197-3320-4314-86ee-a1b90292ab1d\") " Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.619323 4933 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fa13e197-3320-4314-86ee-a1b90292ab1d-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.620284 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa13e197-3320-4314-86ee-a1b90292ab1d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "fa13e197-3320-4314-86ee-a1b90292ab1d" (UID: "fa13e197-3320-4314-86ee-a1b90292ab1d"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.634043 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa13e197-3320-4314-86ee-a1b90292ab1d-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "fa13e197-3320-4314-86ee-a1b90292ab1d" (UID: "fa13e197-3320-4314-86ee-a1b90292ab1d"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.646382 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa13e197-3320-4314-86ee-a1b90292ab1d-kube-api-access-9b8db" (OuterVolumeSpecName: "kube-api-access-9b8db") pod "fa13e197-3320-4314-86ee-a1b90292ab1d" (UID: "fa13e197-3320-4314-86ee-a1b90292ab1d"). InnerVolumeSpecName "kube-api-access-9b8db". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.652647 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa13e197-3320-4314-86ee-a1b90292ab1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa13e197-3320-4314-86ee-a1b90292ab1d" (UID: "fa13e197-3320-4314-86ee-a1b90292ab1d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.654533 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa13e197-3320-4314-86ee-a1b90292ab1d-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "fa13e197-3320-4314-86ee-a1b90292ab1d" (UID: "fa13e197-3320-4314-86ee-a1b90292ab1d"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.657866 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa13e197-3320-4314-86ee-a1b90292ab1d-scripts" (OuterVolumeSpecName: "scripts") pod "fa13e197-3320-4314-86ee-a1b90292ab1d" (UID: "fa13e197-3320-4314-86ee-a1b90292ab1d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.723636 4933 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fa13e197-3320-4314-86ee-a1b90292ab1d-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.724235 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa13e197-3320-4314-86ee-a1b90292ab1d-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.724252 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b8db\" (UniqueName: \"kubernetes.io/projected/fa13e197-3320-4314-86ee-a1b90292ab1d-kube-api-access-9b8db\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.724268 4933 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fa13e197-3320-4314-86ee-a1b90292ab1d-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.724281 4933 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fa13e197-3320-4314-86ee-a1b90292ab1d-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.724292 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa13e197-3320-4314-86ee-a1b90292ab1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.803321 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5tgrr-config-ssbl9"] Dec 01 09:50:44 crc kubenswrapper[4933]: W1201 09:50:44.805143 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42616903_0238_4f13_b0cd_b87324be7f2d.slice/crio-fed16e144e3d952e0d286d7c0e89dadad6575f371fe3518928e23d4ee8ff4b80 WatchSource:0}: Error finding container fed16e144e3d952e0d286d7c0e89dadad6575f371fe3518928e23d4ee8ff4b80: Status 404 returned error can't find the container with id fed16e144e3d952e0d286d7c0e89dadad6575f371fe3518928e23d4ee8ff4b80 Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.825681 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/59f78861-3fff-42c4-9592-4eb047ea6a88-etc-swift\") pod \"swift-storage-0\" (UID: \"59f78861-3fff-42c4-9592-4eb047ea6a88\") " pod="openstack/swift-storage-0" Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.830833 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/59f78861-3fff-42c4-9592-4eb047ea6a88-etc-swift\") pod \"swift-storage-0\" (UID: \"59f78861-3fff-42c4-9592-4eb047ea6a88\") " pod="openstack/swift-storage-0" Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.928036 4933 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec","Type":"ContainerStarted","Data":"fa751b0aa63209b66f2cc4715529e50651a08c1e4cf44398f562d58938f15044"} Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.928368 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.929425 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5tgrr-config-ssbl9" event={"ID":"42616903-0238-4f13-b0cd-b87324be7f2d","Type":"ContainerStarted","Data":"fed16e144e3d952e0d286d7c0e89dadad6575f371fe3518928e23d4ee8ff4b80"} Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.932732 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b8f90456-f375-447c-8f32-8ca629a28861","Type":"ContainerStarted","Data":"623eb7fa1a62caea1200abf885cf68400135d41dca63d2762217dce664ca47fd"} Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.933212 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.936798 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wpgrh" Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.938519 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wpgrh" event={"ID":"fa13e197-3320-4314-86ee-a1b90292ab1d","Type":"ContainerDied","Data":"c588903cb169f0b18c8df3802258305ddd08d41f28822d804cb1221e3150e4a5"} Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.938578 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c588903cb169f0b18c8df3802258305ddd08d41f28822d804cb1221e3150e4a5" Dec 01 09:50:44 crc kubenswrapper[4933]: E1201 09:50:44.939690 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-kxbq6" podUID="3c54760e-eb3b-4ad1-a6ee-3c494878c668" Dec 01 09:50:44 crc kubenswrapper[4933]: I1201 09:50:44.963005 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=46.921537667 podStartE2EDuration="1m19.962986721s" podCreationTimestamp="2025-12-01 09:49:25 +0000 UTC" firstStartedPulling="2025-12-01 09:49:27.634759589 +0000 UTC m=+1058.276483204" lastFinishedPulling="2025-12-01 09:50:00.676208643 +0000 UTC m=+1091.317932258" observedRunningTime="2025-12-01 09:50:44.953701003 +0000 UTC m=+1135.595424628" watchObservedRunningTime="2025-12-01 09:50:44.962986721 +0000 UTC m=+1135.604710336" Dec 01 09:50:45 crc kubenswrapper[4933]: I1201 09:50:45.002987 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 01 09:50:45 crc kubenswrapper[4933]: I1201 09:50:45.020732 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371955.834066 podStartE2EDuration="1m21.020708654s" podCreationTimestamp="2025-12-01 09:49:24 +0000 UTC" firstStartedPulling="2025-12-01 09:49:27.050915291 +0000 UTC m=+1057.692638906" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:50:44.997379963 +0000 UTC m=+1135.639103588" watchObservedRunningTime="2025-12-01 09:50:45.020708654 +0000 UTC m=+1135.662432269" Dec 01 09:50:45 crc kubenswrapper[4933]: W1201 09:50:45.840657 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59f78861_3fff_42c4_9592_4eb047ea6a88.slice/crio-68caa794e02a5b1afbc75d4e879b68498ba1efa599f6e651866268075f041c45 WatchSource:0}: Error finding container 68caa794e02a5b1afbc75d4e879b68498ba1efa599f6e651866268075f041c45: Status 404 returned error can't find the container with id 68caa794e02a5b1afbc75d4e879b68498ba1efa599f6e651866268075f041c45 Dec 01 09:50:45 crc kubenswrapper[4933]: I1201 09:50:45.842891 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 01 09:50:45 crc kubenswrapper[4933]: I1201 09:50:45.959113 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"59f78861-3fff-42c4-9592-4eb047ea6a88","Type":"ContainerStarted","Data":"68caa794e02a5b1afbc75d4e879b68498ba1efa599f6e651866268075f041c45"} Dec 01 09:50:45 crc kubenswrapper[4933]: I1201 09:50:45.965790 4933 generic.go:334] "Generic (PLEG): container finished" podID="42616903-0238-4f13-b0cd-b87324be7f2d" containerID="fe5ac427829548c437732fd34d0acfd8e388d54c19d9f5204a5e27bde57067f2" exitCode=0 Dec 01 09:50:45 crc kubenswrapper[4933]: I1201 09:50:45.966213 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5tgrr-config-ssbl9" event={"ID":"42616903-0238-4f13-b0cd-b87324be7f2d","Type":"ContainerDied","Data":"fe5ac427829548c437732fd34d0acfd8e388d54c19d9f5204a5e27bde57067f2"} Dec 01 09:50:47 crc kubenswrapper[4933]: I1201 09:50:47.449502 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5tgrr-config-ssbl9" Dec 01 09:50:47 crc kubenswrapper[4933]: I1201 09:50:47.489177 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/42616903-0238-4f13-b0cd-b87324be7f2d-additional-scripts\") pod \"42616903-0238-4f13-b0cd-b87324be7f2d\" (UID: \"42616903-0238-4f13-b0cd-b87324be7f2d\") " Dec 01 09:50:47 crc kubenswrapper[4933]: I1201 09:50:47.489332 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/42616903-0238-4f13-b0cd-b87324be7f2d-var-run\") pod \"42616903-0238-4f13-b0cd-b87324be7f2d\" (UID: \"42616903-0238-4f13-b0cd-b87324be7f2d\") " Dec 01 09:50:47 crc kubenswrapper[4933]: I1201 09:50:47.489373 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42616903-0238-4f13-b0cd-b87324be7f2d-scripts\") pod \"42616903-0238-4f13-b0cd-b87324be7f2d\" (UID: \"42616903-0238-4f13-b0cd-b87324be7f2d\") " Dec 01 09:50:47 crc kubenswrapper[4933]: I1201 09:50:47.489481 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/42616903-0238-4f13-b0cd-b87324be7f2d-var-run-ovn\") pod \"42616903-0238-4f13-b0cd-b87324be7f2d\" (UID: \"42616903-0238-4f13-b0cd-b87324be7f2d\") " Dec 01 09:50:47 crc kubenswrapper[4933]: I1201 09:50:47.489479 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42616903-0238-4f13-b0cd-b87324be7f2d-var-run" (OuterVolumeSpecName: "var-run") pod "42616903-0238-4f13-b0cd-b87324be7f2d" (UID: "42616903-0238-4f13-b0cd-b87324be7f2d"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:50:47 crc kubenswrapper[4933]: I1201 09:50:47.489563 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw2qq\" (UniqueName: \"kubernetes.io/projected/42616903-0238-4f13-b0cd-b87324be7f2d-kube-api-access-lw2qq\") pod \"42616903-0238-4f13-b0cd-b87324be7f2d\" (UID: \"42616903-0238-4f13-b0cd-b87324be7f2d\") " Dec 01 09:50:47 crc kubenswrapper[4933]: I1201 09:50:47.489599 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/42616903-0238-4f13-b0cd-b87324be7f2d-var-log-ovn\") pod \"42616903-0238-4f13-b0cd-b87324be7f2d\" (UID: \"42616903-0238-4f13-b0cd-b87324be7f2d\") " Dec 01 09:50:47 crc kubenswrapper[4933]: I1201 09:50:47.489610 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42616903-0238-4f13-b0cd-b87324be7f2d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "42616903-0238-4f13-b0cd-b87324be7f2d" (UID: "42616903-0238-4f13-b0cd-b87324be7f2d"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:50:47 crc kubenswrapper[4933]: I1201 09:50:47.489716 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42616903-0238-4f13-b0cd-b87324be7f2d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "42616903-0238-4f13-b0cd-b87324be7f2d" (UID: "42616903-0238-4f13-b0cd-b87324be7f2d"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:50:47 crc kubenswrapper[4933]: I1201 09:50:47.490046 4933 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/42616903-0238-4f13-b0cd-b87324be7f2d-var-run\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:47 crc kubenswrapper[4933]: I1201 09:50:47.490081 4933 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/42616903-0238-4f13-b0cd-b87324be7f2d-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:47 crc kubenswrapper[4933]: I1201 09:50:47.490091 4933 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/42616903-0238-4f13-b0cd-b87324be7f2d-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:47 crc kubenswrapper[4933]: I1201 09:50:47.490638 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42616903-0238-4f13-b0cd-b87324be7f2d-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "42616903-0238-4f13-b0cd-b87324be7f2d" (UID: "42616903-0238-4f13-b0cd-b87324be7f2d"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:50:47 crc kubenswrapper[4933]: I1201 09:50:47.490854 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42616903-0238-4f13-b0cd-b87324be7f2d-scripts" (OuterVolumeSpecName: "scripts") pod "42616903-0238-4f13-b0cd-b87324be7f2d" (UID: "42616903-0238-4f13-b0cd-b87324be7f2d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:50:47 crc kubenswrapper[4933]: I1201 09:50:47.498107 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42616903-0238-4f13-b0cd-b87324be7f2d-kube-api-access-lw2qq" (OuterVolumeSpecName: "kube-api-access-lw2qq") pod "42616903-0238-4f13-b0cd-b87324be7f2d" (UID: "42616903-0238-4f13-b0cd-b87324be7f2d"). InnerVolumeSpecName "kube-api-access-lw2qq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:50:47 crc kubenswrapper[4933]: I1201 09:50:47.592399 4933 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/42616903-0238-4f13-b0cd-b87324be7f2d-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:47 crc kubenswrapper[4933]: I1201 09:50:47.592444 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42616903-0238-4f13-b0cd-b87324be7f2d-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:47 crc kubenswrapper[4933]: I1201 09:50:47.592455 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lw2qq\" (UniqueName: \"kubernetes.io/projected/42616903-0238-4f13-b0cd-b87324be7f2d-kube-api-access-lw2qq\") on node \"crc\" DevicePath \"\"" Dec 01 09:50:47 crc kubenswrapper[4933]: I1201 09:50:47.993585 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5tgrr-config-ssbl9" Dec 01 09:50:47 crc kubenswrapper[4933]: I1201 09:50:47.994958 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5tgrr-config-ssbl9" event={"ID":"42616903-0238-4f13-b0cd-b87324be7f2d","Type":"ContainerDied","Data":"fed16e144e3d952e0d286d7c0e89dadad6575f371fe3518928e23d4ee8ff4b80"} Dec 01 09:50:47 crc kubenswrapper[4933]: I1201 09:50:47.995004 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fed16e144e3d952e0d286d7c0e89dadad6575f371fe3518928e23d4ee8ff4b80" Dec 01 09:50:47 crc kubenswrapper[4933]: I1201 09:50:47.998245 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"59f78861-3fff-42c4-9592-4eb047ea6a88","Type":"ContainerStarted","Data":"6830440677c47e8e8bdfd7731888d3789fd46999f12b05a2dd011031073a833a"} Dec 01 09:50:47 crc kubenswrapper[4933]: I1201 09:50:47.998294 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"59f78861-3fff-42c4-9592-4eb047ea6a88","Type":"ContainerStarted","Data":"a46208896f92b9bf31564d87c5c74b5de5af48d77b0e815b5a5658c53c93bae2"} Dec 01 09:50:48 crc kubenswrapper[4933]: I1201 09:50:48.845888 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-5tgrr-config-ssbl9"] Dec 01 09:50:48 crc kubenswrapper[4933]: I1201 09:50:48.995754 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-5tgrr-config-ssbl9"] Dec 01 09:50:49 crc kubenswrapper[4933]: I1201 09:50:49.019694 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"59f78861-3fff-42c4-9592-4eb047ea6a88","Type":"ContainerStarted","Data":"a9f030817d35bad5be663bade0ac31aa7ff2734def9fc615e5145c22317d4236"} Dec 01 09:50:49 crc kubenswrapper[4933]: I1201 09:50:49.019828 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"59f78861-3fff-42c4-9592-4eb047ea6a88","Type":"ContainerStarted","Data":"7e48265fa955e089a75649e529e90d03bcff0e7f68b47c4b187acae9d4ac1e37"} Dec 01 09:50:49 crc kubenswrapper[4933]: I1201 09:50:49.761159 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42616903-0238-4f13-b0cd-b87324be7f2d" path="/var/lib/kubelet/pods/42616903-0238-4f13-b0cd-b87324be7f2d/volumes" Dec 01 09:50:49 crc kubenswrapper[4933]: I1201 09:50:49.798294 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-5tgrr" Dec 01 09:50:54 crc kubenswrapper[4933]: I1201 09:50:54.093883 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"59f78861-3fff-42c4-9592-4eb047ea6a88","Type":"ContainerStarted","Data":"b6f9c9c2800e97cace8c881383e51e2564e34608775eb854af021ab69f074471"} Dec 01 09:50:55 crc kubenswrapper[4933]: I1201 09:50:55.125735 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"59f78861-3fff-42c4-9592-4eb047ea6a88","Type":"ContainerStarted","Data":"1e6ca2e0ce6f57094cd2ca860454140990323bf41226dbaea6aca0b865b91b10"} Dec 01 09:50:55 crc kubenswrapper[4933]: I1201 09:50:55.126334 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"59f78861-3fff-42c4-9592-4eb047ea6a88","Type":"ContainerStarted","Data":"6e3d3ddae7daae1eea06c99e220855419caa9f4d83b17ea5abbd10310d4f7590"} Dec 01 09:50:55 crc kubenswrapper[4933]: I1201 09:50:55.126359 4933 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"59f78861-3fff-42c4-9592-4eb047ea6a88","Type":"ContainerStarted","Data":"557dfa871ef7c43a48dab78ea8a8ac09c4ff698842a2b5ad6cabdc0375fadfab"} Dec 01 09:50:56 crc kubenswrapper[4933]: I1201 09:50:56.375847 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="b8f90456-f375-447c-8f32-8ca629a28861" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Dec 01 09:50:56 crc kubenswrapper[4933]: I1201 09:50:56.934660 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.274339 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"59f78861-3fff-42c4-9592-4eb047ea6a88","Type":"ContainerStarted","Data":"087709ce633be40d6fc6e52afec26d3cb83c95cfb83b39c5d0ce94df05fd629e"} Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.540224 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-gb654"] Dec 01 09:50:57 crc kubenswrapper[4933]: E1201 09:50:57.540764 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa13e197-3320-4314-86ee-a1b90292ab1d" containerName="swift-ring-rebalance" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.540809 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa13e197-3320-4314-86ee-a1b90292ab1d" containerName="swift-ring-rebalance" Dec 01 09:50:57 crc kubenswrapper[4933]: E1201 09:50:57.540843 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42616903-0238-4f13-b0cd-b87324be7f2d" containerName="ovn-config" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.540851 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="42616903-0238-4f13-b0cd-b87324be7f2d" containerName="ovn-config" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.548974 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="42616903-0238-4f13-b0cd-b87324be7f2d" containerName="ovn-config" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.549034 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa13e197-3320-4314-86ee-a1b90292ab1d" containerName="swift-ring-rebalance" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.549950 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-gb654" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.552282 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-gb654"] Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.633897 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a0de276-0552-49b4-a7ef-ee46a6a07983-operator-scripts\") pod \"cinder-db-create-gb654\" (UID: \"0a0de276-0552-49b4-a7ef-ee46a6a07983\") " pod="openstack/cinder-db-create-gb654" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.634170 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cds9s\" (UniqueName: \"kubernetes.io/projected/0a0de276-0552-49b4-a7ef-ee46a6a07983-kube-api-access-cds9s\") pod \"cinder-db-create-gb654\" (UID: \"0a0de276-0552-49b4-a7ef-ee46a6a07983\") " pod="openstack/cinder-db-create-gb654" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.684234 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-j9kzb"] Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.694788 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-j9kzb" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.736100 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cds9s\" (UniqueName: \"kubernetes.io/projected/0a0de276-0552-49b4-a7ef-ee46a6a07983-kube-api-access-cds9s\") pod \"cinder-db-create-gb654\" (UID: \"0a0de276-0552-49b4-a7ef-ee46a6a07983\") " pod="openstack/cinder-db-create-gb654" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.736208 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a0de276-0552-49b4-a7ef-ee46a6a07983-operator-scripts\") pod \"cinder-db-create-gb654\" (UID: \"0a0de276-0552-49b4-a7ef-ee46a6a07983\") " pod="openstack/cinder-db-create-gb654" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.738791 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a0de276-0552-49b4-a7ef-ee46a6a07983-operator-scripts\") pod \"cinder-db-create-gb654\" (UID: \"0a0de276-0552-49b4-a7ef-ee46a6a07983\") " pod="openstack/cinder-db-create-gb654" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.742514 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-j9kzb"] Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.772006 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-ca4e-account-create-update-sjbgj"] Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.778689 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-ca4e-account-create-update-sjbgj" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.787574 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.790556 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-ca4e-account-create-update-sjbgj"] Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.839033 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np984\" (UniqueName: \"kubernetes.io/projected/3c64f8f7-73b4-41da-848d-48951c88da96-kube-api-access-np984\") pod \"barbican-ca4e-account-create-update-sjbgj\" (UID: \"3c64f8f7-73b4-41da-848d-48951c88da96\") " pod="openstack/barbican-ca4e-account-create-update-sjbgj" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.839178 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c7186d-9c43-48a2-baf8-67143842715e-operator-scripts\") pod \"barbican-db-create-j9kzb\" (UID: \"67c7186d-9c43-48a2-baf8-67143842715e\") " pod="openstack/barbican-db-create-j9kzb" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.839255 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c64f8f7-73b4-41da-848d-48951c88da96-operator-scripts\") pod \"barbican-ca4e-account-create-update-sjbgj\" (UID: \"3c64f8f7-73b4-41da-848d-48951c88da96\") " pod="openstack/barbican-ca4e-account-create-update-sjbgj" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.839334 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5h47\" (UniqueName: \"kubernetes.io/projected/67c7186d-9c43-48a2-baf8-67143842715e-kube-api-access-n5h47\") pod \"barbican-db-create-j9kzb\" (UID: \"67c7186d-9c43-48a2-baf8-67143842715e\") " pod="openstack/barbican-db-create-j9kzb" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.894563 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cds9s\" (UniqueName: \"kubernetes.io/projected/0a0de276-0552-49b4-a7ef-ee46a6a07983-kube-api-access-cds9s\") pod \"cinder-db-create-gb654\" (UID: \"0a0de276-0552-49b4-a7ef-ee46a6a07983\") " pod="openstack/cinder-db-create-gb654" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.896885 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-158d-account-create-update-27q24"] Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.899817 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-158d-account-create-update-27q24" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.923740 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.932162 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-158d-account-create-update-27q24"] Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.946679 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-gb654" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.948508 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5h47\" (UniqueName: \"kubernetes.io/projected/67c7186d-9c43-48a2-baf8-67143842715e-kube-api-access-n5h47\") pod \"barbican-db-create-j9kzb\" (UID: \"67c7186d-9c43-48a2-baf8-67143842715e\") " pod="openstack/barbican-db-create-j9kzb" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.948587 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np984\" (UniqueName: \"kubernetes.io/projected/3c64f8f7-73b4-41da-848d-48951c88da96-kube-api-access-np984\") pod \"barbican-ca4e-account-create-update-sjbgj\" (UID: \"3c64f8f7-73b4-41da-848d-48951c88da96\") " pod="openstack/barbican-ca4e-account-create-update-sjbgj" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.948665 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c7186d-9c43-48a2-baf8-67143842715e-operator-scripts\") pod \"barbican-db-create-j9kzb\" (UID: \"67c7186d-9c43-48a2-baf8-67143842715e\") " pod="openstack/barbican-db-create-j9kzb" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.948707 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c64f8f7-73b4-41da-848d-48951c88da96-operator-scripts\") pod \"barbican-ca4e-account-create-update-sjbgj\" (UID: \"3c64f8f7-73b4-41da-848d-48951c88da96\") " pod="openstack/barbican-ca4e-account-create-update-sjbgj" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.949733 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c64f8f7-73b4-41da-848d-48951c88da96-operator-scripts\") pod \"barbican-ca4e-account-create-update-sjbgj\" (UID: \"3c64f8f7-73b4-41da-848d-48951c88da96\") " pod="openstack/barbican-ca4e-account-create-update-sjbgj" Dec 01 09:50:57 crc kubenswrapper[4933]: I1201 09:50:57.950971 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c7186d-9c43-48a2-baf8-67143842715e-operator-scripts\") pod \"barbican-db-create-j9kzb\" (UID: \"67c7186d-9c43-48a2-baf8-67143842715e\") " pod="openstack/barbican-db-create-j9kzb" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.035852 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np984\" (UniqueName: \"kubernetes.io/projected/3c64f8f7-73b4-41da-848d-48951c88da96-kube-api-access-np984\") pod \"barbican-ca4e-account-create-update-sjbgj\" (UID: \"3c64f8f7-73b4-41da-848d-48951c88da96\") " pod="openstack/barbican-ca4e-account-create-update-sjbgj" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.042340 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-4j5nb"] Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.044362 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4j5nb" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.048864 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.049240 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mbpv6" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.049476 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.049721 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.050842 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpgg9\" (UniqueName: \"kubernetes.io/projected/64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df-kube-api-access-rpgg9\") pod \"cinder-158d-account-create-update-27q24\" (UID: \"64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df\") " pod="openstack/cinder-158d-account-create-update-27q24" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.050916 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df-operator-scripts\") pod \"cinder-158d-account-create-update-27q24\" (UID: \"64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df\") " pod="openstack/cinder-158d-account-create-update-27q24" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.074369 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4j5nb"] Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.106616 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5h47\" (UniqueName: \"kubernetes.io/projected/67c7186d-9c43-48a2-baf8-67143842715e-kube-api-access-n5h47\") pod \"barbican-db-create-j9kzb\" (UID: \"67c7186d-9c43-48a2-baf8-67143842715e\") " pod="openstack/barbican-db-create-j9kzb" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.128623 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-ca4e-account-create-update-sjbgj" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.162172 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpgg9\" (UniqueName: \"kubernetes.io/projected/64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df-kube-api-access-rpgg9\") pod \"cinder-158d-account-create-update-27q24\" (UID: \"64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df\") " pod="openstack/cinder-158d-account-create-update-27q24" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.162256 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df-operator-scripts\") pod \"cinder-158d-account-create-update-27q24\" (UID: \"64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df\") " pod="openstack/cinder-158d-account-create-update-27q24" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.162340 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaf6393e-2760-4dc5-8ccf-3079d38a2e87-config-data\") pod \"keystone-db-sync-4j5nb\" (UID: \"aaf6393e-2760-4dc5-8ccf-3079d38a2e87\") " pod="openstack/keystone-db-sync-4j5nb" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.162367 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrzpz\" (UniqueName: \"kubernetes.io/projected/aaf6393e-2760-4dc5-8ccf-3079d38a2e87-kube-api-access-rrzpz\") pod \"keystone-db-sync-4j5nb\" (UID: \"aaf6393e-2760-4dc5-8ccf-3079d38a2e87\") " pod="openstack/keystone-db-sync-4j5nb" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.162411 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaf6393e-2760-4dc5-8ccf-3079d38a2e87-combined-ca-bundle\") pod \"keystone-db-sync-4j5nb\" (UID: \"aaf6393e-2760-4dc5-8ccf-3079d38a2e87\") " pod="openstack/keystone-db-sync-4j5nb" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.163892 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df-operator-scripts\") pod \"cinder-158d-account-create-update-27q24\" (UID: \"64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df\") " pod="openstack/cinder-158d-account-create-update-27q24" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.197221 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-xvxdh"] Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.208706 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-xvxdh" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.254542 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-xvxdh"] Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.260288 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpgg9\" (UniqueName: \"kubernetes.io/projected/64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df-kube-api-access-rpgg9\") pod \"cinder-158d-account-create-update-27q24\" (UID: \"64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df\") " pod="openstack/cinder-158d-account-create-update-27q24" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.264123 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaf6393e-2760-4dc5-8ccf-3079d38a2e87-config-data\") pod \"keystone-db-sync-4j5nb\" (UID: \"aaf6393e-2760-4dc5-8ccf-3079d38a2e87\") " pod="openstack/keystone-db-sync-4j5nb" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.264190 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrzpz\" (UniqueName: \"kubernetes.io/projected/aaf6393e-2760-4dc5-8ccf-3079d38a2e87-kube-api-access-rrzpz\") pod \"keystone-db-sync-4j5nb\" (UID: \"aaf6393e-2760-4dc5-8ccf-3079d38a2e87\") " pod="openstack/keystone-db-sync-4j5nb" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.264245 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaf6393e-2760-4dc5-8ccf-3079d38a2e87-combined-ca-bundle\") pod \"keystone-db-sync-4j5nb\" (UID: \"aaf6393e-2760-4dc5-8ccf-3079d38a2e87\") " pod="openstack/keystone-db-sync-4j5nb" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.275214 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaf6393e-2760-4dc5-8ccf-3079d38a2e87-combined-ca-bundle\") pod \"keystone-db-sync-4j5nb\" (UID: \"aaf6393e-2760-4dc5-8ccf-3079d38a2e87\") " pod="openstack/keystone-db-sync-4j5nb" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.276373 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaf6393e-2760-4dc5-8ccf-3079d38a2e87-config-data\") pod \"keystone-db-sync-4j5nb\" (UID: \"aaf6393e-2760-4dc5-8ccf-3079d38a2e87\") " pod="openstack/keystone-db-sync-4j5nb" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.330839 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-j9kzb" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.340170 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrzpz\" (UniqueName: \"kubernetes.io/projected/aaf6393e-2760-4dc5-8ccf-3079d38a2e87-kube-api-access-rrzpz\") pod \"keystone-db-sync-4j5nb\" (UID: \"aaf6393e-2760-4dc5-8ccf-3079d38a2e87\") " pod="openstack/keystone-db-sync-4j5nb" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.346692 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-e124-account-create-update-jhpxj"] Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.348035 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-e124-account-create-update-jhpxj" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.353013 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.363495 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"59f78861-3fff-42c4-9592-4eb047ea6a88","Type":"ContainerStarted","Data":"08defd38bbe6197e3f458094349133cb13974bb05f96e24737158dbb2cded9aa"} Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.375865 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5083456d-b59d-4697-b8b2-c11158ee75fa-operator-scripts\") pod \"neutron-db-create-xvxdh\" (UID: \"5083456d-b59d-4697-b8b2-c11158ee75fa\") " pod="openstack/neutron-db-create-xvxdh" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.376157 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdg5p\" (UniqueName: \"kubernetes.io/projected/5083456d-b59d-4697-b8b2-c11158ee75fa-kube-api-access-vdg5p\") pod \"neutron-db-create-xvxdh\" (UID: \"5083456d-b59d-4697-b8b2-c11158ee75fa\") " pod="openstack/neutron-db-create-xvxdh" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.396051 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-e124-account-create-update-jhpxj"] Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.466977 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-158d-account-create-update-27q24" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.497517 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdg5p\" (UniqueName: \"kubernetes.io/projected/5083456d-b59d-4697-b8b2-c11158ee75fa-kube-api-access-vdg5p\") pod \"neutron-db-create-xvxdh\" (UID: \"5083456d-b59d-4697-b8b2-c11158ee75fa\") " pod="openstack/neutron-db-create-xvxdh" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.497631 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8whn\" (UniqueName: \"kubernetes.io/projected/19d82604-29a2-4fa3-8a04-0c0f456dc62b-kube-api-access-x8whn\") pod \"neutron-e124-account-create-update-jhpxj\" (UID: \"19d82604-29a2-4fa3-8a04-0c0f456dc62b\") " pod="openstack/neutron-e124-account-create-update-jhpxj" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.497696 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19d82604-29a2-4fa3-8a04-0c0f456dc62b-operator-scripts\") pod \"neutron-e124-account-create-update-jhpxj\" (UID: \"19d82604-29a2-4fa3-8a04-0c0f456dc62b\") " pod="openstack/neutron-e124-account-create-update-jhpxj" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.497753 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5083456d-b59d-4697-b8b2-c11158ee75fa-operator-scripts\") pod \"neutron-db-create-xvxdh\" (UID: \"5083456d-b59d-4697-b8b2-c11158ee75fa\") " pod="openstack/neutron-db-create-xvxdh" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.498886 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/5083456d-b59d-4697-b8b2-c11158ee75fa-operator-scripts\") pod \"neutron-db-create-xvxdh\" (UID: \"5083456d-b59d-4697-b8b2-c11158ee75fa\") " pod="openstack/neutron-db-create-xvxdh" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.499617 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4j5nb" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.601720 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8whn\" (UniqueName: \"kubernetes.io/projected/19d82604-29a2-4fa3-8a04-0c0f456dc62b-kube-api-access-x8whn\") pod \"neutron-e124-account-create-update-jhpxj\" (UID: \"19d82604-29a2-4fa3-8a04-0c0f456dc62b\") " pod="openstack/neutron-e124-account-create-update-jhpxj" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.601800 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19d82604-29a2-4fa3-8a04-0c0f456dc62b-operator-scripts\") pod \"neutron-e124-account-create-update-jhpxj\" (UID: \"19d82604-29a2-4fa3-8a04-0c0f456dc62b\") " pod="openstack/neutron-e124-account-create-update-jhpxj" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.603041 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19d82604-29a2-4fa3-8a04-0c0f456dc62b-operator-scripts\") pod \"neutron-e124-account-create-update-jhpxj\" (UID: \"19d82604-29a2-4fa3-8a04-0c0f456dc62b\") " pod="openstack/neutron-e124-account-create-update-jhpxj" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.609517 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdg5p\" (UniqueName: \"kubernetes.io/projected/5083456d-b59d-4697-b8b2-c11158ee75fa-kube-api-access-vdg5p\") pod \"neutron-db-create-xvxdh\" (UID: \"5083456d-b59d-4697-b8b2-c11158ee75fa\") " pod="openstack/neutron-db-create-xvxdh" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.649171 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xvxdh" Dec 01 09:50:58 crc kubenswrapper[4933]: I1201 09:50:58.782657 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8whn\" (UniqueName: \"kubernetes.io/projected/19d82604-29a2-4fa3-8a04-0c0f456dc62b-kube-api-access-x8whn\") pod \"neutron-e124-account-create-update-jhpxj\" (UID: \"19d82604-29a2-4fa3-8a04-0c0f456dc62b\") " pod="openstack/neutron-e124-account-create-update-jhpxj" Dec 01 09:50:59 crc kubenswrapper[4933]: I1201 09:50:59.000389 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-e124-account-create-update-jhpxj" Dec 01 09:50:59 crc kubenswrapper[4933]: I1201 09:50:59.402405 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kxbq6" event={"ID":"3c54760e-eb3b-4ad1-a6ee-3c494878c668","Type":"ContainerStarted","Data":"2539d04a02ccfdebb2af26f48d9c89f1ed297f931ac0804f4ad11692f0128239"} Dec 01 09:50:59 crc kubenswrapper[4933]: I1201 09:50:59.451455 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"59f78861-3fff-42c4-9592-4eb047ea6a88","Type":"ContainerStarted","Data":"0f23640ab69153a4bd13cd1a0bf5ca38df9fa633ba383c8796ceedb9752f787f"} Dec 01 09:50:59 crc kubenswrapper[4933]: I1201 09:50:59.458667 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-kxbq6" podStartSLOduration=4.424549111 podStartE2EDuration="39.458636771s" podCreationTimestamp="2025-12-01 09:50:20 +0000 UTC" firstStartedPulling="2025-12-01 09:50:21.796210306 +0000 UTC m=+1112.437933921" lastFinishedPulling="2025-12-01 09:50:56.830297966 +0000 UTC m=+1147.472021581" observedRunningTime="2025-12-01 09:50:59.443145672 +0000 UTC m=+1150.084869287" watchObservedRunningTime="2025-12-01 09:50:59.458636771 +0000 UTC m=+1150.100360386" Dec 01 09:50:59 crc kubenswrapper[4933]: I1201 09:50:59.697374 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-j9kzb"] Dec 01 09:50:59 crc kubenswrapper[4933]: I1201 09:50:59.721397 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-ca4e-account-create-update-sjbgj"] Dec 01 09:50:59 crc kubenswrapper[4933]: I1201 09:50:59.776008 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-gb654"] Dec 01 09:50:59 crc kubenswrapper[4933]: W1201 09:50:59.795039 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a0de276_0552_49b4_a7ef_ee46a6a07983.slice/crio-7e5110ffafa980439f9eceeb90c56261ddcd8d61e241cb5413d95096a74419b6 WatchSource:0}: Error finding container 7e5110ffafa980439f9eceeb90c56261ddcd8d61e241cb5413d95096a74419b6: Status 404 returned error can't find the container with id 7e5110ffafa980439f9eceeb90c56261ddcd8d61e241cb5413d95096a74419b6 Dec 01 09:50:59 crc kubenswrapper[4933]: I1201 09:50:59.935263 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-xvxdh"] Dec 01 09:50:59 crc kubenswrapper[4933]: W1201 09:50:59.954336 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5083456d_b59d_4697_b8b2_c11158ee75fa.slice/crio-56cda9b34a60c6e9132c8a8de13fa93668378507e52f536604ba6ed42f7088dd WatchSource:0}: Error finding container 56cda9b34a60c6e9132c8a8de13fa93668378507e52f536604ba6ed42f7088dd: Status 404 returned error can't find the container with id 56cda9b34a60c6e9132c8a8de13fa93668378507e52f536604ba6ed42f7088dd Dec 01 09:50:59 crc kubenswrapper[4933]: I1201 09:50:59.976377 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4j5nb"] Dec 01 09:51:00 crc kubenswrapper[4933]: I1201 09:51:00.183357 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-158d-account-create-update-27q24"] Dec 01 09:51:00 crc kubenswrapper[4933]: W1201 09:51:00.184901 4933 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64bc0b91_4fe2_4f5b_9a3a_8f7ec7c254df.slice/crio-e92af4d9f49cb9b41fee44e7b1149ef5ef96c4ad8daa3dd6b15f9fae788f0d51 WatchSource:0}: Error finding container e92af4d9f49cb9b41fee44e7b1149ef5ef96c4ad8daa3dd6b15f9fae788f0d51: Status 404 returned error can't find the container with id e92af4d9f49cb9b41fee44e7b1149ef5ef96c4ad8daa3dd6b15f9fae788f0d51 Dec 01 09:51:00 crc kubenswrapper[4933]: W1201 09:51:00.210992 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19d82604_29a2_4fa3_8a04_0c0f456dc62b.slice/crio-4753c38b1d4a0d67dcfbb1509b446da037bf888654088cc405c85b94ee3d04b6 WatchSource:0}: Error finding container 4753c38b1d4a0d67dcfbb1509b446da037bf888654088cc405c85b94ee3d04b6: Status 404 returned error can't find the container with id 4753c38b1d4a0d67dcfbb1509b446da037bf888654088cc405c85b94ee3d04b6 Dec 01 09:51:00 crc kubenswrapper[4933]: I1201 09:51:00.211457 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-e124-account-create-update-jhpxj"] Dec 01 09:51:00 crc kubenswrapper[4933]: I1201 09:51:00.471400 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e124-account-create-update-jhpxj" event={"ID":"19d82604-29a2-4fa3-8a04-0c0f456dc62b","Type":"ContainerStarted","Data":"4753c38b1d4a0d67dcfbb1509b446da037bf888654088cc405c85b94ee3d04b6"} Dec 01 09:51:00 crc kubenswrapper[4933]: I1201 09:51:00.473932 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4j5nb" event={"ID":"aaf6393e-2760-4dc5-8ccf-3079d38a2e87","Type":"ContainerStarted","Data":"f13304e29a300066c3ed09c0206f5d3e0d56a7fcb55965633c87c08d23dcebce"} Dec 01 09:51:00 crc kubenswrapper[4933]: I1201 09:51:00.476359 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ca4e-account-create-update-sjbgj" event={"ID":"3c64f8f7-73b4-41da-848d-48951c88da96","Type":"ContainerStarted","Data":"f7f39a6ed13e0cd919015d9368b996a99c3848340419e715d353ae48150f803f"} Dec 01 09:51:00 crc kubenswrapper[4933]: I1201 09:51:00.481942 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-158d-account-create-update-27q24" event={"ID":"64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df","Type":"ContainerStarted","Data":"e92af4d9f49cb9b41fee44e7b1149ef5ef96c4ad8daa3dd6b15f9fae788f0d51"} Dec 01 09:51:00 crc kubenswrapper[4933]: I1201 09:51:00.484541 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-j9kzb" event={"ID":"67c7186d-9c43-48a2-baf8-67143842715e","Type":"ContainerStarted","Data":"b4f8925ef583bb446b6e0f9fe06c22455a2cf77ebd7ae452a88aa10519e8529f"} Dec 01 09:51:00 crc kubenswrapper[4933]: I1201 09:51:00.487357 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gb654" event={"ID":"0a0de276-0552-49b4-a7ef-ee46a6a07983","Type":"ContainerStarted","Data":"7e5110ffafa980439f9eceeb90c56261ddcd8d61e241cb5413d95096a74419b6"} Dec 01 09:51:00 crc kubenswrapper[4933]: I1201 09:51:00.490766 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xvxdh" event={"ID":"5083456d-b59d-4697-b8b2-c11158ee75fa","Type":"ContainerStarted","Data":"56cda9b34a60c6e9132c8a8de13fa93668378507e52f536604ba6ed42f7088dd"} Dec 01 09:51:03 crc kubenswrapper[4933]: I1201 09:51:03.539669 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"59f78861-3fff-42c4-9592-4eb047ea6a88","Type":"ContainerStarted","Data":"68177003efcf21349be38825eab1fd7cebf94162b84133fd0c680f360fb40df3"} Dec 01 09:51:03 crc kubenswrapper[4933]: I1201 09:51:03.540708 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"59f78861-3fff-42c4-9592-4eb047ea6a88","Type":"ContainerStarted","Data":"ecd9599888cebc5938b649bcdacc554c3a263f09545c15414014e5588547548b"} Dec 01 09:51:03 crc kubenswrapper[4933]: I1201 09:51:03.544342 4933 generic.go:334] "Generic (PLEG): container finished" podID="0a0de276-0552-49b4-a7ef-ee46a6a07983" containerID="48351fb94199d69c4deec1dafedeb440220f533f734824feec8368cff225885c" exitCode=0 Dec 01 09:51:03 crc kubenswrapper[4933]: I1201 09:51:03.544425 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gb654" event={"ID":"0a0de276-0552-49b4-a7ef-ee46a6a07983","Type":"ContainerDied","Data":"48351fb94199d69c4deec1dafedeb440220f533f734824feec8368cff225885c"} Dec 01 09:51:03 crc kubenswrapper[4933]: I1201 09:51:03.549282 4933 generic.go:334] "Generic (PLEG): container finished" podID="5083456d-b59d-4697-b8b2-c11158ee75fa" containerID="92d551697ed02edbcbee2a7268a083d10ec87b94bf5cee3ee8c7b30c5f053cb8" exitCode=0 Dec 01 09:51:03 crc kubenswrapper[4933]: I1201 09:51:03.549588 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xvxdh" event={"ID":"5083456d-b59d-4697-b8b2-c11158ee75fa","Type":"ContainerDied","Data":"92d551697ed02edbcbee2a7268a083d10ec87b94bf5cee3ee8c7b30c5f053cb8"} Dec 01 09:51:03 crc kubenswrapper[4933]: I1201 09:51:03.551559 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e124-account-create-update-jhpxj" event={"ID":"19d82604-29a2-4fa3-8a04-0c0f456dc62b","Type":"ContainerStarted","Data":"99662f468e975adb9b1fc1368306f487fe8b8df5d9a50c9e5697c58c44d3e6b3"} Dec 01 09:51:03 crc kubenswrapper[4933]: I1201 09:51:03.553742 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ca4e-account-create-update-sjbgj" event={"ID":"3c64f8f7-73b4-41da-848d-48951c88da96","Type":"ContainerStarted","Data":"dc4904d7279d4a2d91ff4304dd1ee59268b823111ef072b5f8edac3493f64589"} Dec 01 09:51:03 crc kubenswrapper[4933]: I1201 09:51:03.558159 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-158d-account-create-update-27q24" event={"ID":"64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df","Type":"ContainerStarted","Data":"e56de64b8e858c1e0e2fb440271398280e46ce422fe2dfb45e2a37773bdf974e"} Dec 01 09:51:03 crc kubenswrapper[4933]: I1201 09:51:03.560956 4933 generic.go:334] "Generic (PLEG): container finished" podID="67c7186d-9c43-48a2-baf8-67143842715e" containerID="d10fdbcba439e53f21982e442ee2b24ae602f161378d0673960fa352fff593fd" exitCode=0 Dec 01 09:51:03 crc kubenswrapper[4933]: I1201 09:51:03.561012 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-j9kzb" event={"ID":"67c7186d-9c43-48a2-baf8-67143842715e","Type":"ContainerDied","Data":"d10fdbcba439e53f21982e442ee2b24ae602f161378d0673960fa352fff593fd"} Dec 01 09:51:03 crc kubenswrapper[4933]: I1201 09:51:03.601455 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-158d-account-create-update-27q24" podStartSLOduration=6.601427283 podStartE2EDuration="6.601427283s" podCreationTimestamp="2025-12-01 09:50:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-01 09:51:03.59600042 +0000 UTC m=+1154.237724035" watchObservedRunningTime="2025-12-01 09:51:03.601427283 +0000 UTC m=+1154.243150888" Dec 01 09:51:03 crc kubenswrapper[4933]: I1201 09:51:03.624865 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-ca4e-account-create-update-sjbgj" podStartSLOduration=6.624835996 podStartE2EDuration="6.624835996s" podCreationTimestamp="2025-12-01 09:50:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:51:03.614530664 +0000 UTC m=+1154.256254289" watchObservedRunningTime="2025-12-01 09:51:03.624835996 +0000 UTC m=+1154.266559611" Dec 01 09:51:03 crc kubenswrapper[4933]: I1201 09:51:03.683823 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-e124-account-create-update-jhpxj" podStartSLOduration=5.68379437 podStartE2EDuration="5.68379437s" podCreationTimestamp="2025-12-01 09:50:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:51:03.677884695 +0000 UTC m=+1154.319608320" watchObservedRunningTime="2025-12-01 09:51:03.68379437 +0000 UTC m=+1154.325517985" Dec 01 09:51:04 crc kubenswrapper[4933]: I1201 09:51:04.585756 4933 generic.go:334] "Generic (PLEG): container finished" podID="64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df" containerID="e56de64b8e858c1e0e2fb440271398280e46ce422fe2dfb45e2a37773bdf974e" exitCode=0 Dec 01 09:51:04 crc kubenswrapper[4933]: I1201 09:51:04.585839 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-158d-account-create-update-27q24" event={"ID":"64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df","Type":"ContainerDied","Data":"e56de64b8e858c1e0e2fb440271398280e46ce422fe2dfb45e2a37773bdf974e"} Dec 01 09:51:04 crc kubenswrapper[4933]: I1201 09:51:04.612086 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"59f78861-3fff-42c4-9592-4eb047ea6a88","Type":"ContainerStarted","Data":"a2f23b79f1435d1489a46388587e92b02cd9d93d341ab5ea4327e0f5a442b350"} Dec 01 09:51:04 crc kubenswrapper[4933]: I1201 09:51:04.612150 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"59f78861-3fff-42c4-9592-4eb047ea6a88","Type":"ContainerStarted","Data":"8521ad807615d61800fa4ef13c598cee9be07608657d585342ab32700457c527"} Dec 01 09:51:04 crc kubenswrapper[4933]: I1201 09:51:04.615003 4933 generic.go:334] "Generic (PLEG): container finished" podID="19d82604-29a2-4fa3-8a04-0c0f456dc62b" containerID="99662f468e975adb9b1fc1368306f487fe8b8df5d9a50c9e5697c58c44d3e6b3" exitCode=0 Dec 01 09:51:04 crc kubenswrapper[4933]: I1201 09:51:04.615108 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e124-account-create-update-jhpxj" event={"ID":"19d82604-29a2-4fa3-8a04-0c0f456dc62b","Type":"ContainerDied","Data":"99662f468e975adb9b1fc1368306f487fe8b8df5d9a50c9e5697c58c44d3e6b3"} Dec 01 09:51:04 crc kubenswrapper[4933]: I1201 09:51:04.618409 4933 generic.go:334] "Generic (PLEG): container finished" podID="3c64f8f7-73b4-41da-848d-48951c88da96" containerID="dc4904d7279d4a2d91ff4304dd1ee59268b823111ef072b5f8edac3493f64589" exitCode=0 Dec 01 09:51:04 crc kubenswrapper[4933]: I1201 09:51:04.618779 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ca4e-account-create-update-sjbgj" 
event={"ID":"3c64f8f7-73b4-41da-848d-48951c88da96","Type":"ContainerDied","Data":"dc4904d7279d4a2d91ff4304dd1ee59268b823111ef072b5f8edac3493f64589"} Dec 01 09:51:04 crc kubenswrapper[4933]: I1201 09:51:04.676055 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=42.691367758 podStartE2EDuration="53.676015198s" podCreationTimestamp="2025-12-01 09:50:11 +0000 UTC" firstStartedPulling="2025-12-01 09:50:45.843672168 +0000 UTC m=+1136.485395783" lastFinishedPulling="2025-12-01 09:50:56.828319608 +0000 UTC m=+1147.470043223" observedRunningTime="2025-12-01 09:51:04.660527439 +0000 UTC m=+1155.302251074" watchObservedRunningTime="2025-12-01 09:51:04.676015198 +0000 UTC m=+1155.317738813" Dec 01 09:51:05 crc kubenswrapper[4933]: I1201 09:51:05.077040 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-x8znt"] Dec 01 09:51:05 crc kubenswrapper[4933]: I1201 09:51:05.088816 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-x8znt" Dec 01 09:51:05 crc kubenswrapper[4933]: I1201 09:51:05.092285 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 01 09:51:05 crc kubenswrapper[4933]: I1201 09:51:05.102583 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-x8znt"] Dec 01 09:51:05 crc kubenswrapper[4933]: I1201 09:51:05.203603 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-x8znt\" (UID: \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\") " pod="openstack/dnsmasq-dns-764c5664d7-x8znt" Dec 01 09:51:05 crc kubenswrapper[4933]: I1201 09:51:05.203668 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-x8znt\" (UID: \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\") " pod="openstack/dnsmasq-dns-764c5664d7-x8znt" Dec 01 09:51:05 crc kubenswrapper[4933]: I1201 09:51:05.203696 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-config\") pod \"dnsmasq-dns-764c5664d7-x8znt\" (UID: \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\") " pod="openstack/dnsmasq-dns-764c5664d7-x8znt" Dec 01 09:51:05 crc kubenswrapper[4933]: I1201 09:51:05.203753 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-x8znt\" (UID: \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\") " pod="openstack/dnsmasq-dns-764c5664d7-x8znt" Dec 01 09:51:05 crc kubenswrapper[4933]: I1201 09:51:05.203816 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-dns-svc\") pod \"dnsmasq-dns-764c5664d7-x8znt\" (UID: \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\") " pod="openstack/dnsmasq-dns-764c5664d7-x8znt" Dec 01 09:51:05 crc kubenswrapper[4933]: I1201 09:51:05.203849 4933 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jts4s\" (UniqueName: \"kubernetes.io/projected/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-kube-api-access-jts4s\") pod \"dnsmasq-dns-764c5664d7-x8znt\" (UID: \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\") " pod="openstack/dnsmasq-dns-764c5664d7-x8znt" Dec 01 09:51:05 crc kubenswrapper[4933]: I1201 09:51:05.305215 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-x8znt\" (UID: \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\") " pod="openstack/dnsmasq-dns-764c5664d7-x8znt" Dec 01 09:51:05 crc kubenswrapper[4933]: I1201 09:51:05.305287 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-x8znt\" (UID: \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\") " pod="openstack/dnsmasq-dns-764c5664d7-x8znt" Dec 01 09:51:05 crc kubenswrapper[4933]: I1201 09:51:05.305375 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-config\") pod \"dnsmasq-dns-764c5664d7-x8znt\" (UID: \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\") " pod="openstack/dnsmasq-dns-764c5664d7-x8znt" Dec 01 09:51:05 crc kubenswrapper[4933]: I1201 09:51:05.305434 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-x8znt\" (UID: \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\") " pod="openstack/dnsmasq-dns-764c5664d7-x8znt" Dec 01 09:51:05 crc kubenswrapper[4933]: I1201 09:51:05.305496 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-dns-svc\") pod \"dnsmasq-dns-764c5664d7-x8znt\" (UID: \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\") " pod="openstack/dnsmasq-dns-764c5664d7-x8znt" Dec 01 09:51:05 crc kubenswrapper[4933]: I1201 09:51:05.305531 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jts4s\" (UniqueName: \"kubernetes.io/projected/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-kube-api-access-jts4s\") pod \"dnsmasq-dns-764c5664d7-x8znt\" (UID: \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\") " pod="openstack/dnsmasq-dns-764c5664d7-x8znt" Dec 01 09:51:05 crc kubenswrapper[4933]: I1201 09:51:05.307488 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-dns-svc\") pod \"dnsmasq-dns-764c5664d7-x8znt\" (UID: \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\") " pod="openstack/dnsmasq-dns-764c5664d7-x8znt" Dec 01 09:51:05 crc kubenswrapper[4933]: I1201 09:51:05.307583 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-config\") pod \"dnsmasq-dns-764c5664d7-x8znt\" (UID: \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\") " pod="openstack/dnsmasq-dns-764c5664d7-x8znt" Dec 01 09:51:05 crc kubenswrapper[4933]: I1201 09:51:05.308243 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-x8znt\" (UID: \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\") " pod="openstack/dnsmasq-dns-764c5664d7-x8znt" Dec 01 09:51:05 crc kubenswrapper[4933]: I1201 09:51:05.308982 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-x8znt\" (UID: \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\") " pod="openstack/dnsmasq-dns-764c5664d7-x8znt" Dec 01 09:51:05 crc kubenswrapper[4933]: I1201 09:51:05.310849 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-x8znt\" (UID: \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\") " pod="openstack/dnsmasq-dns-764c5664d7-x8znt" Dec 01 09:51:05 crc kubenswrapper[4933]: I1201 09:51:05.343867 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jts4s\" (UniqueName: \"kubernetes.io/projected/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-kube-api-access-jts4s\") pod \"dnsmasq-dns-764c5664d7-x8znt\" (UID: \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\") " pod="openstack/dnsmasq-dns-764c5664d7-x8znt" Dec 01 09:51:05 crc kubenswrapper[4933]: I1201 09:51:05.425375 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-x8znt" Dec 01 09:51:06 crc kubenswrapper[4933]: I1201 09:51:06.376607 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:51:08 crc kubenswrapper[4933]: I1201 09:51:08.668865 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gb654" event={"ID":"0a0de276-0552-49b4-a7ef-ee46a6a07983","Type":"ContainerDied","Data":"7e5110ffafa980439f9eceeb90c56261ddcd8d61e241cb5413d95096a74419b6"} Dec 01 09:51:08 crc kubenswrapper[4933]: I1201 09:51:08.669425 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e5110ffafa980439f9eceeb90c56261ddcd8d61e241cb5413d95096a74419b6" Dec 01 09:51:08 crc kubenswrapper[4933]: I1201 09:51:08.872267 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gb654" Dec 01 09:51:08 crc kubenswrapper[4933]: I1201 09:51:08.878106 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-j9kzb" Dec 01 09:51:08 crc kubenswrapper[4933]: I1201 09:51:08.958580 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xvxdh" Dec 01 09:51:08 crc kubenswrapper[4933]: I1201 09:51:08.967187 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-158d-account-create-update-27q24" Dec 01 09:51:08 crc kubenswrapper[4933]: I1201 09:51:08.977643 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-e124-account-create-update-jhpxj" Dec 01 09:51:08 crc kubenswrapper[4933]: I1201 09:51:08.985343 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c7186d-9c43-48a2-baf8-67143842715e-operator-scripts\") pod \"67c7186d-9c43-48a2-baf8-67143842715e\" (UID: \"67c7186d-9c43-48a2-baf8-67143842715e\") " Dec 01 09:51:08 crc kubenswrapper[4933]: I1201 09:51:08.985433 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cds9s\" (UniqueName: \"kubernetes.io/projected/0a0de276-0552-49b4-a7ef-ee46a6a07983-kube-api-access-cds9s\") pod \"0a0de276-0552-49b4-a7ef-ee46a6a07983\" (UID: \"0a0de276-0552-49b4-a7ef-ee46a6a07983\") " Dec 01 09:51:08 crc kubenswrapper[4933]: I1201 09:51:08.985486 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5h47\" (UniqueName: \"kubernetes.io/projected/67c7186d-9c43-48a2-baf8-67143842715e-kube-api-access-n5h47\") pod \"67c7186d-9c43-48a2-baf8-67143842715e\" (UID: \"67c7186d-9c43-48a2-baf8-67143842715e\") " Dec 01 09:51:08 crc kubenswrapper[4933]: I1201 09:51:08.985624 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a0de276-0552-49b4-a7ef-ee46a6a07983-operator-scripts\") pod \"0a0de276-0552-49b4-a7ef-ee46a6a07983\" (UID: \"0a0de276-0552-49b4-a7ef-ee46a6a07983\") " Dec 01 09:51:08 crc kubenswrapper[4933]: I1201 09:51:08.987642 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a0de276-0552-49b4-a7ef-ee46a6a07983-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0a0de276-0552-49b4-a7ef-ee46a6a07983" (UID: "0a0de276-0552-49b4-a7ef-ee46a6a07983"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:51:08 crc kubenswrapper[4933]: I1201 09:51:08.988189 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67c7186d-9c43-48a2-baf8-67143842715e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "67c7186d-9c43-48a2-baf8-67143842715e" (UID: "67c7186d-9c43-48a2-baf8-67143842715e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:51:08 crc kubenswrapper[4933]: I1201 09:51:08.995550 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a0de276-0552-49b4-a7ef-ee46a6a07983-kube-api-access-cds9s" (OuterVolumeSpecName: "kube-api-access-cds9s") pod "0a0de276-0552-49b4-a7ef-ee46a6a07983" (UID: "0a0de276-0552-49b4-a7ef-ee46a6a07983"). InnerVolumeSpecName "kube-api-access-cds9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.001708 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67c7186d-9c43-48a2-baf8-67143842715e-kube-api-access-n5h47" (OuterVolumeSpecName: "kube-api-access-n5h47") pod "67c7186d-9c43-48a2-baf8-67143842715e" (UID: "67c7186d-9c43-48a2-baf8-67143842715e"). InnerVolumeSpecName "kube-api-access-n5h47". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.013483 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-ca4e-account-create-update-sjbgj" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.087900 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpgg9\" (UniqueName: \"kubernetes.io/projected/64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df-kube-api-access-rpgg9\") pod \"64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df\" (UID: \"64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df\") " Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.088013 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5083456d-b59d-4697-b8b2-c11158ee75fa-operator-scripts\") pod \"5083456d-b59d-4697-b8b2-c11158ee75fa\" (UID: \"5083456d-b59d-4697-b8b2-c11158ee75fa\") " Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.088075 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19d82604-29a2-4fa3-8a04-0c0f456dc62b-operator-scripts\") pod \"19d82604-29a2-4fa3-8a04-0c0f456dc62b\" (UID: \"19d82604-29a2-4fa3-8a04-0c0f456dc62b\") " Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.088101 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8whn\" (UniqueName: \"kubernetes.io/projected/19d82604-29a2-4fa3-8a04-0c0f456dc62b-kube-api-access-x8whn\") pod \"19d82604-29a2-4fa3-8a04-0c0f456dc62b\" (UID: \"19d82604-29a2-4fa3-8a04-0c0f456dc62b\") " Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.088174 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df-operator-scripts\") pod \"64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df\" (UID: \"64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df\") " Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.088357 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdg5p\" (UniqueName: \"kubernetes.io/projected/5083456d-b59d-4697-b8b2-c11158ee75fa-kube-api-access-vdg5p\") pod \"5083456d-b59d-4697-b8b2-c11158ee75fa\" (UID: \"5083456d-b59d-4697-b8b2-c11158ee75fa\") " Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.088756 4933 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c7186d-9c43-48a2-baf8-67143842715e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.088779 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cds9s\" (UniqueName: \"kubernetes.io/projected/0a0de276-0552-49b4-a7ef-ee46a6a07983-kube-api-access-cds9s\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.088792 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5h47\" (UniqueName: \"kubernetes.io/projected/67c7186d-9c43-48a2-baf8-67143842715e-kube-api-access-n5h47\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.088802 4933 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a0de276-0552-49b4-a7ef-ee46a6a07983-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.089444 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/19d82604-29a2-4fa3-8a04-0c0f456dc62b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19d82604-29a2-4fa3-8a04-0c0f456dc62b" (UID: "19d82604-29a2-4fa3-8a04-0c0f456dc62b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.089935 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df" (UID: "64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.090633 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5083456d-b59d-4697-b8b2-c11158ee75fa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5083456d-b59d-4697-b8b2-c11158ee75fa" (UID: "5083456d-b59d-4697-b8b2-c11158ee75fa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.090983 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df-kube-api-access-rpgg9" (OuterVolumeSpecName: "kube-api-access-rpgg9") pod "64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df" (UID: "64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df"). InnerVolumeSpecName "kube-api-access-rpgg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.094507 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5083456d-b59d-4697-b8b2-c11158ee75fa-kube-api-access-vdg5p" (OuterVolumeSpecName: "kube-api-access-vdg5p") pod "5083456d-b59d-4697-b8b2-c11158ee75fa" (UID: "5083456d-b59d-4697-b8b2-c11158ee75fa"). InnerVolumeSpecName "kube-api-access-vdg5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.096029 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19d82604-29a2-4fa3-8a04-0c0f456dc62b-kube-api-access-x8whn" (OuterVolumeSpecName: "kube-api-access-x8whn") pod "19d82604-29a2-4fa3-8a04-0c0f456dc62b" (UID: "19d82604-29a2-4fa3-8a04-0c0f456dc62b"). InnerVolumeSpecName "kube-api-access-x8whn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.190239 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c64f8f7-73b4-41da-848d-48951c88da96-operator-scripts\") pod \"3c64f8f7-73b4-41da-848d-48951c88da96\" (UID: \"3c64f8f7-73b4-41da-848d-48951c88da96\") " Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.190321 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np984\" (UniqueName: \"kubernetes.io/projected/3c64f8f7-73b4-41da-848d-48951c88da96-kube-api-access-np984\") pod \"3c64f8f7-73b4-41da-848d-48951c88da96\" (UID: \"3c64f8f7-73b4-41da-848d-48951c88da96\") " Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.190898 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpgg9\" (UniqueName: \"kubernetes.io/projected/64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df-kube-api-access-rpgg9\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.190930 4933 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5083456d-b59d-4697-b8b2-c11158ee75fa-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.190945 4933 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19d82604-29a2-4fa3-8a04-0c0f456dc62b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.190995 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8whn\" (UniqueName: \"kubernetes.io/projected/19d82604-29a2-4fa3-8a04-0c0f456dc62b-kube-api-access-x8whn\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.191005 4933 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.191014 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdg5p\" (UniqueName: \"kubernetes.io/projected/5083456d-b59d-4697-b8b2-c11158ee75fa-kube-api-access-vdg5p\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.192083 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c64f8f7-73b4-41da-848d-48951c88da96-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3c64f8f7-73b4-41da-848d-48951c88da96" (UID: "3c64f8f7-73b4-41da-848d-48951c88da96"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.194420 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c64f8f7-73b4-41da-848d-48951c88da96-kube-api-access-np984" (OuterVolumeSpecName: "kube-api-access-np984") pod "3c64f8f7-73b4-41da-848d-48951c88da96" (UID: "3c64f8f7-73b4-41da-848d-48951c88da96"). InnerVolumeSpecName "kube-api-access-np984". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.293501 4933 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c64f8f7-73b4-41da-848d-48951c88da96-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.293559 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np984\" (UniqueName: \"kubernetes.io/projected/3c64f8f7-73b4-41da-848d-48951c88da96-kube-api-access-np984\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.364863 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-x8znt"] Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.692636 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e124-account-create-update-jhpxj" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.699622 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-x8znt" event={"ID":"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c","Type":"ContainerStarted","Data":"3d52f2bf451b59a7ad8ad9c43068cb7de10802b7ae9ea5549e0d215eb145c7f4"} Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.699680 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-x8znt" event={"ID":"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c","Type":"ContainerStarted","Data":"7cbdd976db4aa10f08c59801dd44094900bf5dd0bc971408e2ee065652ec4072"} Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.699696 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e124-account-create-update-jhpxj" event={"ID":"19d82604-29a2-4fa3-8a04-0c0f456dc62b","Type":"ContainerDied","Data":"4753c38b1d4a0d67dcfbb1509b446da037bf888654088cc405c85b94ee3d04b6"} Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.699716 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4753c38b1d4a0d67dcfbb1509b446da037bf888654088cc405c85b94ee3d04b6" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.699728 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4j5nb" event={"ID":"aaf6393e-2760-4dc5-8ccf-3079d38a2e87","Type":"ContainerStarted","Data":"bc2ded9eac3aa46c05ac5e11fb2d0a4ca60d881c0c72604f9a5445d716136467"} Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.706472 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ca4e-account-create-update-sjbgj" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.706482 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ca4e-account-create-update-sjbgj" event={"ID":"3c64f8f7-73b4-41da-848d-48951c88da96","Type":"ContainerDied","Data":"f7f39a6ed13e0cd919015d9368b996a99c3848340419e715d353ae48150f803f"} Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.706533 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7f39a6ed13e0cd919015d9368b996a99c3848340419e715d353ae48150f803f" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.719948 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-158d-account-create-update-27q24" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.720006 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-158d-account-create-update-27q24" event={"ID":"64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df","Type":"ContainerDied","Data":"e92af4d9f49cb9b41fee44e7b1149ef5ef96c4ad8daa3dd6b15f9fae788f0d51"} Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.720063 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e92af4d9f49cb9b41fee44e7b1149ef5ef96c4ad8daa3dd6b15f9fae788f0d51" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.731012 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-j9kzb" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.731009 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-j9kzb" event={"ID":"67c7186d-9c43-48a2-baf8-67143842715e","Type":"ContainerDied","Data":"b4f8925ef583bb446b6e0f9fe06c22455a2cf77ebd7ae452a88aa10519e8529f"} Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.731095 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4f8925ef583bb446b6e0f9fe06c22455a2cf77ebd7ae452a88aa10519e8529f" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.735255 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xvxdh" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.735429 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xvxdh" event={"ID":"5083456d-b59d-4697-b8b2-c11158ee75fa","Type":"ContainerDied","Data":"56cda9b34a60c6e9132c8a8de13fa93668378507e52f536604ba6ed42f7088dd"} Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.735517 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56cda9b34a60c6e9132c8a8de13fa93668378507e52f536604ba6ed42f7088dd" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.735296 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-gb654" Dec 01 09:51:09 crc kubenswrapper[4933]: I1201 09:51:09.771710 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-4j5nb" podStartSLOduration=3.932521804 podStartE2EDuration="12.771687344s" podCreationTimestamp="2025-12-01 09:50:57 +0000 UTC" firstStartedPulling="2025-12-01 09:51:00.02701235 +0000 UTC m=+1150.668735965" lastFinishedPulling="2025-12-01 09:51:08.86617789 +0000 UTC m=+1159.507901505" observedRunningTime="2025-12-01 09:51:09.752524046 +0000 UTC m=+1160.394247681" watchObservedRunningTime="2025-12-01 09:51:09.771687344 +0000 UTC m=+1160.413410959" Dec 01 09:51:10 crc kubenswrapper[4933]: I1201 09:51:10.749225 4933 generic.go:334] "Generic (PLEG): container finished" podID="d22c4dcf-9fbf-430c-b820-268fa4a4eb2c" containerID="3d52f2bf451b59a7ad8ad9c43068cb7de10802b7ae9ea5549e0d215eb145c7f4" exitCode=0 Dec 01 09:51:10 crc kubenswrapper[4933]: I1201 09:51:10.750514 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-x8znt" event={"ID":"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c","Type":"ContainerDied","Data":"3d52f2bf451b59a7ad8ad9c43068cb7de10802b7ae9ea5549e0d215eb145c7f4"} Dec 01 09:51:11 crc kubenswrapper[4933]: I1201 09:51:11.762738 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-x8znt" event={"ID":"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c","Type":"ContainerStarted","Data":"a24fba64383147fc7a1656e65e6b946f639999db286e843b3fe3b1a387c4359b"} Dec 01 09:51:11 crc kubenswrapper[4933]: I1201 09:51:11.763273 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-x8znt" Dec 01 09:51:11 crc kubenswrapper[4933]: I1201 09:51:11.785812 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-x8znt" podStartSLOduration=6.785789678 podStartE2EDuration="6.785789678s" podCreationTimestamp="2025-12-01 09:51:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:51:11.783873531 +0000 UTC m=+1162.425597156" watchObservedRunningTime="2025-12-01 09:51:11.785789678 +0000 UTC m=+1162.427513293" Dec 01 09:51:13 crc kubenswrapper[4933]: I1201 09:51:13.785251 4933 generic.go:334] "Generic (PLEG): container finished" podID="aaf6393e-2760-4dc5-8ccf-3079d38a2e87" containerID="bc2ded9eac3aa46c05ac5e11fb2d0a4ca60d881c0c72604f9a5445d716136467" exitCode=0 Dec 01 09:51:13 crc kubenswrapper[4933]: I1201 09:51:13.785369 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4j5nb" event={"ID":"aaf6393e-2760-4dc5-8ccf-3079d38a2e87","Type":"ContainerDied","Data":"bc2ded9eac3aa46c05ac5e11fb2d0a4ca60d881c0c72604f9a5445d716136467"} Dec 01 09:51:14 crc kubenswrapper[4933]: I1201 09:51:14.798476 4933 generic.go:334] "Generic (PLEG): container finished" podID="3c54760e-eb3b-4ad1-a6ee-3c494878c668" containerID="2539d04a02ccfdebb2af26f48d9c89f1ed297f931ac0804f4ad11692f0128239" exitCode=0 Dec 01 09:51:14 crc kubenswrapper[4933]: I1201 09:51:14.798578 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kxbq6" event={"ID":"3c54760e-eb3b-4ad1-a6ee-3c494878c668","Type":"ContainerDied","Data":"2539d04a02ccfdebb2af26f48d9c89f1ed297f931ac0804f4ad11692f0128239"} Dec 01 09:51:15 crc kubenswrapper[4933]: I1201 09:51:15.132179 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4j5nb" Dec 01 09:51:15 crc kubenswrapper[4933]: I1201 09:51:15.220137 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaf6393e-2760-4dc5-8ccf-3079d38a2e87-combined-ca-bundle\") pod \"aaf6393e-2760-4dc5-8ccf-3079d38a2e87\" (UID: \"aaf6393e-2760-4dc5-8ccf-3079d38a2e87\") " Dec 01 09:51:15 crc kubenswrapper[4933]: I1201 09:51:15.220189 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaf6393e-2760-4dc5-8ccf-3079d38a2e87-config-data\") pod \"aaf6393e-2760-4dc5-8ccf-3079d38a2e87\" (UID: \"aaf6393e-2760-4dc5-8ccf-3079d38a2e87\") " Dec 01 09:51:15 crc kubenswrapper[4933]: I1201 09:51:15.220283 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrzpz\" (UniqueName: \"kubernetes.io/projected/aaf6393e-2760-4dc5-8ccf-3079d38a2e87-kube-api-access-rrzpz\") pod \"aaf6393e-2760-4dc5-8ccf-3079d38a2e87\" (UID: \"aaf6393e-2760-4dc5-8ccf-3079d38a2e87\") " Dec 01 09:51:15 crc kubenswrapper[4933]: I1201 09:51:15.227612 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaf6393e-2760-4dc5-8ccf-3079d38a2e87-kube-api-access-rrzpz" (OuterVolumeSpecName: "kube-api-access-rrzpz") pod "aaf6393e-2760-4dc5-8ccf-3079d38a2e87" (UID: "aaf6393e-2760-4dc5-8ccf-3079d38a2e87"). InnerVolumeSpecName "kube-api-access-rrzpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:51:15 crc kubenswrapper[4933]: I1201 09:51:15.260097 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaf6393e-2760-4dc5-8ccf-3079d38a2e87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aaf6393e-2760-4dc5-8ccf-3079d38a2e87" (UID: "aaf6393e-2760-4dc5-8ccf-3079d38a2e87"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:51:15 crc kubenswrapper[4933]: I1201 09:51:15.284507 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaf6393e-2760-4dc5-8ccf-3079d38a2e87-config-data" (OuterVolumeSpecName: "config-data") pod "aaf6393e-2760-4dc5-8ccf-3079d38a2e87" (UID: "aaf6393e-2760-4dc5-8ccf-3079d38a2e87"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:51:15 crc kubenswrapper[4933]: I1201 09:51:15.323441 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaf6393e-2760-4dc5-8ccf-3079d38a2e87-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:15 crc kubenswrapper[4933]: I1201 09:51:15.323502 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaf6393e-2760-4dc5-8ccf-3079d38a2e87-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:15 crc kubenswrapper[4933]: I1201 09:51:15.323516 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrzpz\" (UniqueName: \"kubernetes.io/projected/aaf6393e-2760-4dc5-8ccf-3079d38a2e87-kube-api-access-rrzpz\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:15 crc kubenswrapper[4933]: I1201 09:51:15.809250 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4j5nb" Dec 01 09:51:15 crc kubenswrapper[4933]: I1201 09:51:15.809257 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4j5nb" event={"ID":"aaf6393e-2760-4dc5-8ccf-3079d38a2e87","Type":"ContainerDied","Data":"f13304e29a300066c3ed09c0206f5d3e0d56a7fcb55965633c87c08d23dcebce"} Dec 01 09:51:15 crc kubenswrapper[4933]: I1201 09:51:15.810615 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f13304e29a300066c3ed09c0206f5d3e0d56a7fcb55965633c87c08d23dcebce" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.119248 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xv6k6"] Dec 01 09:51:16 crc kubenswrapper[4933]: E1201 09:51:16.123367 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19d82604-29a2-4fa3-8a04-0c0f456dc62b" containerName="mariadb-account-create-update" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.123390 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="19d82604-29a2-4fa3-8a04-0c0f456dc62b" containerName="mariadb-account-create-update" Dec 01 09:51:16 crc kubenswrapper[4933]: E1201 09:51:16.123409 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c64f8f7-73b4-41da-848d-48951c88da96" containerName="mariadb-account-create-update" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.123419 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c64f8f7-73b4-41da-848d-48951c88da96" containerName="mariadb-account-create-update" Dec 01 09:51:16 crc kubenswrapper[4933]: E1201 09:51:16.123429 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a0de276-0552-49b4-a7ef-ee46a6a07983" containerName="mariadb-database-create" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.123435 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a0de276-0552-49b4-a7ef-ee46a6a07983" containerName="mariadb-database-create" Dec 01 09:51:16 crc kubenswrapper[4933]: E1201 09:51:16.123459 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c7186d-9c43-48a2-baf8-67143842715e" containerName="mariadb-database-create" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.123464 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c7186d-9c43-48a2-baf8-67143842715e" containerName="mariadb-database-create" Dec 01 09:51:16 crc kubenswrapper[4933]: E1201 09:51:16.123479 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5083456d-b59d-4697-b8b2-c11158ee75fa" containerName="mariadb-database-create" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.123486 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="5083456d-b59d-4697-b8b2-c11158ee75fa" containerName="mariadb-database-create" Dec 01 09:51:16 crc kubenswrapper[4933]: E1201 09:51:16.123496 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df" containerName="mariadb-account-create-update" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.123502 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df" containerName="mariadb-account-create-update" Dec 01 09:51:16 crc kubenswrapper[4933]: E1201 09:51:16.123510 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf6393e-2760-4dc5-8ccf-3079d38a2e87" containerName="keystone-db-sync" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.123516 4933 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="aaf6393e-2760-4dc5-8ccf-3079d38a2e87" containerName="keystone-db-sync" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.123722 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaf6393e-2760-4dc5-8ccf-3079d38a2e87" containerName="keystone-db-sync" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.123743 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="19d82604-29a2-4fa3-8a04-0c0f456dc62b" containerName="mariadb-account-create-update" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.123751 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df" containerName="mariadb-account-create-update" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.123763 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="5083456d-b59d-4697-b8b2-c11158ee75fa" containerName="mariadb-database-create" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.123775 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c7186d-9c43-48a2-baf8-67143842715e" containerName="mariadb-database-create" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.123781 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c64f8f7-73b4-41da-848d-48951c88da96" containerName="mariadb-account-create-update" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.123790 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a0de276-0552-49b4-a7ef-ee46a6a07983" containerName="mariadb-database-create" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.124623 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xv6k6" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.134892 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mbpv6" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.135141 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.135286 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.135687 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.143450 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.147637 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-x8znt"] Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.148090 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-x8znt" podUID="d22c4dcf-9fbf-430c-b820-268fa4a4eb2c" containerName="dnsmasq-dns" containerID="cri-o://a24fba64383147fc7a1656e65e6b946f639999db286e843b3fe3b1a387c4359b" gracePeriod=10 Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.153488 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-x8znt" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.185125 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xv6k6"] Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.244398 4933 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-combined-ca-bundle\") pod \"keystone-bootstrap-xv6k6\" (UID: \"e925628e-0a4e-4893-b74d-7cd76160d44e\") " pod="openstack/keystone-bootstrap-xv6k6" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.244474 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-scripts\") pod \"keystone-bootstrap-xv6k6\" (UID: \"e925628e-0a4e-4893-b74d-7cd76160d44e\") " pod="openstack/keystone-bootstrap-xv6k6" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.244521 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-fernet-keys\") pod \"keystone-bootstrap-xv6k6\" (UID: \"e925628e-0a4e-4893-b74d-7cd76160d44e\") " pod="openstack/keystone-bootstrap-xv6k6" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.244544 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-config-data\") pod \"keystone-bootstrap-xv6k6\" (UID: \"e925628e-0a4e-4893-b74d-7cd76160d44e\") " pod="openstack/keystone-bootstrap-xv6k6" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.244576 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l24r\" (UniqueName: \"kubernetes.io/projected/e925628e-0a4e-4893-b74d-7cd76160d44e-kube-api-access-4l24r\") pod \"keystone-bootstrap-xv6k6\" (UID: \"e925628e-0a4e-4893-b74d-7cd76160d44e\") " pod="openstack/keystone-bootstrap-xv6k6" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.244595 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-credential-keys\") pod \"keystone-bootstrap-xv6k6\" (UID: \"e925628e-0a4e-4893-b74d-7cd76160d44e\") " pod="openstack/keystone-bootstrap-xv6k6" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.316204 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-pd6bl"] Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.321805 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-pd6bl" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.372445 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-combined-ca-bundle\") pod \"keystone-bootstrap-xv6k6\" (UID: \"e925628e-0a4e-4893-b74d-7cd76160d44e\") " pod="openstack/keystone-bootstrap-xv6k6" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.372598 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-scripts\") pod \"keystone-bootstrap-xv6k6\" (UID: \"e925628e-0a4e-4893-b74d-7cd76160d44e\") " pod="openstack/keystone-bootstrap-xv6k6" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.372745 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-fernet-keys\") pod \"keystone-bootstrap-xv6k6\" (UID: \"e925628e-0a4e-4893-b74d-7cd76160d44e\") " pod="openstack/keystone-bootstrap-xv6k6" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.372780 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-config-data\") pod \"keystone-bootstrap-xv6k6\" (UID: \"e925628e-0a4e-4893-b74d-7cd76160d44e\") " pod="openstack/keystone-bootstrap-xv6k6" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.372848 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l24r\" (UniqueName: \"kubernetes.io/projected/e925628e-0a4e-4893-b74d-7cd76160d44e-kube-api-access-4l24r\") pod \"keystone-bootstrap-xv6k6\" (UID: \"e925628e-0a4e-4893-b74d-7cd76160d44e\") " pod="openstack/keystone-bootstrap-xv6k6" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.372874 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-credential-keys\") pod \"keystone-bootstrap-xv6k6\" (UID: \"e925628e-0a4e-4893-b74d-7cd76160d44e\") " pod="openstack/keystone-bootstrap-xv6k6" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.392126 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-fernet-keys\") pod \"keystone-bootstrap-xv6k6\" (UID: \"e925628e-0a4e-4893-b74d-7cd76160d44e\") " pod="openstack/keystone-bootstrap-xv6k6" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.393173 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-combined-ca-bundle\") pod \"keystone-bootstrap-xv6k6\" (UID: \"e925628e-0a4e-4893-b74d-7cd76160d44e\") " pod="openstack/keystone-bootstrap-xv6k6" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.396350 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-scripts\") pod \"keystone-bootstrap-xv6k6\" (UID: \"e925628e-0a4e-4893-b74d-7cd76160d44e\") " pod="openstack/keystone-bootstrap-xv6k6" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.406544 4933 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-config-data\") pod \"keystone-bootstrap-xv6k6\" (UID: \"e925628e-0a4e-4893-b74d-7cd76160d44e\") " pod="openstack/keystone-bootstrap-xv6k6" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.416604 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-credential-keys\") pod \"keystone-bootstrap-xv6k6\" (UID: \"e925628e-0a4e-4893-b74d-7cd76160d44e\") " pod="openstack/keystone-bootstrap-xv6k6" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.451982 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-pd6bl"] Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.462207 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l24r\" (UniqueName: \"kubernetes.io/projected/e925628e-0a4e-4893-b74d-7cd76160d44e-kube-api-access-4l24r\") pod \"keystone-bootstrap-xv6k6\" (UID: \"e925628e-0a4e-4893-b74d-7cd76160d44e\") " pod="openstack/keystone-bootstrap-xv6k6" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.476878 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-pd6bl\" (UID: \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\") " pod="openstack/dnsmasq-dns-5959f8865f-pd6bl" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.476987 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrnsg\" (UniqueName: \"kubernetes.io/projected/cd8b66b5-8ebe-4c29-8b02-a1889908b981-kube-api-access-xrnsg\") pod \"dnsmasq-dns-5959f8865f-pd6bl\" (UID: \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\") " pod="openstack/dnsmasq-dns-5959f8865f-pd6bl" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.477016 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-pd6bl\" (UID: \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\") " pod="openstack/dnsmasq-dns-5959f8865f-pd6bl" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.477085 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-dns-svc\") pod \"dnsmasq-dns-5959f8865f-pd6bl\" (UID: \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\") " pod="openstack/dnsmasq-dns-5959f8865f-pd6bl" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.477120 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-config\") pod \"dnsmasq-dns-5959f8865f-pd6bl\" (UID: \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\") " pod="openstack/dnsmasq-dns-5959f8865f-pd6bl" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.477152 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-pd6bl\" (UID: 
\"cd8b66b5-8ebe-4c29-8b02-a1889908b981\") " pod="openstack/dnsmasq-dns-5959f8865f-pd6bl" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.482907 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xv6k6" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.515371 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-74ddcb8d87-g7ljk"] Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.517298 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74ddcb8d87-g7ljk" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.529114 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-br9km" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.529666 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.537027 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.537297 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.556830 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-74ddcb8d87-g7ljk"] Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.581168 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1eb99423-f394-4f95-9279-13f68e394e4f-horizon-secret-key\") pod \"horizon-74ddcb8d87-g7ljk\" (UID: \"1eb99423-f394-4f95-9279-13f68e394e4f\") " pod="openstack/horizon-74ddcb8d87-g7ljk" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.581394 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrnsg\" (UniqueName: \"kubernetes.io/projected/cd8b66b5-8ebe-4c29-8b02-a1889908b981-kube-api-access-xrnsg\") pod \"dnsmasq-dns-5959f8865f-pd6bl\" (UID: \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\") " pod="openstack/dnsmasq-dns-5959f8865f-pd6bl" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.581477 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-pd6bl\" (UID: \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\") " pod="openstack/dnsmasq-dns-5959f8865f-pd6bl" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.581539 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f4cd\" (UniqueName: \"kubernetes.io/projected/1eb99423-f394-4f95-9279-13f68e394e4f-kube-api-access-9f4cd\") pod \"horizon-74ddcb8d87-g7ljk\" (UID: \"1eb99423-f394-4f95-9279-13f68e394e4f\") " pod="openstack/horizon-74ddcb8d87-g7ljk" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.581606 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1eb99423-f394-4f95-9279-13f68e394e4f-config-data\") pod \"horizon-74ddcb8d87-g7ljk\" (UID: \"1eb99423-f394-4f95-9279-13f68e394e4f\") " pod="openstack/horizon-74ddcb8d87-g7ljk" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.581698 4933 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-dns-svc\") pod \"dnsmasq-dns-5959f8865f-pd6bl\" (UID: \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\") " pod="openstack/dnsmasq-dns-5959f8865f-pd6bl" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.581759 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-config\") pod \"dnsmasq-dns-5959f8865f-pd6bl\" (UID: \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\") " pod="openstack/dnsmasq-dns-5959f8865f-pd6bl" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.581816 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1eb99423-f394-4f95-9279-13f68e394e4f-logs\") pod \"horizon-74ddcb8d87-g7ljk\" (UID: \"1eb99423-f394-4f95-9279-13f68e394e4f\") " pod="openstack/horizon-74ddcb8d87-g7ljk" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.581869 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-pd6bl\" (UID: \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\") " pod="openstack/dnsmasq-dns-5959f8865f-pd6bl" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.581918 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1eb99423-f394-4f95-9279-13f68e394e4f-scripts\") pod \"horizon-74ddcb8d87-g7ljk\" (UID: \"1eb99423-f394-4f95-9279-13f68e394e4f\") " pod="openstack/horizon-74ddcb8d87-g7ljk" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.582102 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-pd6bl\" (UID: \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\") " pod="openstack/dnsmasq-dns-5959f8865f-pd6bl" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.583837 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-dns-svc\") pod \"dnsmasq-dns-5959f8865f-pd6bl\" (UID: \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\") " pod="openstack/dnsmasq-dns-5959f8865f-pd6bl" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.584275 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-pd6bl\" (UID: \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\") " pod="openstack/dnsmasq-dns-5959f8865f-pd6bl" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.584576 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-pd6bl\" (UID: \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\") " pod="openstack/dnsmasq-dns-5959f8865f-pd6bl" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.594289 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-config\") pod \"dnsmasq-dns-5959f8865f-pd6bl\" (UID: \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\") " pod="openstack/dnsmasq-dns-5959f8865f-pd6bl" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.594928 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-pd6bl\" (UID: \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\") " pod="openstack/dnsmasq-dns-5959f8865f-pd6bl" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.607437 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-ltlmj"] Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.609175 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ltlmj" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.616837 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.623844 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4n754" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.624197 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.648282 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrnsg\" (UniqueName: \"kubernetes.io/projected/cd8b66b5-8ebe-4c29-8b02-a1889908b981-kube-api-access-xrnsg\") pod \"dnsmasq-dns-5959f8865f-pd6bl\" (UID: \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\") " pod="openstack/dnsmasq-dns-5959f8865f-pd6bl" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.650946 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-pd6bl" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.682773 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-qw69v"] Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.683686 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1eb99423-f394-4f95-9279-13f68e394e4f-logs\") pod \"horizon-74ddcb8d87-g7ljk\" (UID: \"1eb99423-f394-4f95-9279-13f68e394e4f\") " pod="openstack/horizon-74ddcb8d87-g7ljk" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.683753 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1eb99423-f394-4f95-9279-13f68e394e4f-scripts\") pod \"horizon-74ddcb8d87-g7ljk\" (UID: \"1eb99423-f394-4f95-9279-13f68e394e4f\") " pod="openstack/horizon-74ddcb8d87-g7ljk" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.683837 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np5hh\" (UniqueName: \"kubernetes.io/projected/c4531da4-441c-4003-9f20-719853edb0b4-kube-api-access-np5hh\") pod \"neutron-db-sync-ltlmj\" (UID: \"c4531da4-441c-4003-9f20-719853edb0b4\") " pod="openstack/neutron-db-sync-ltlmj" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.683871 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1eb99423-f394-4f95-9279-13f68e394e4f-horizon-secret-key\") pod \"horizon-74ddcb8d87-g7ljk\" (UID: \"1eb99423-f394-4f95-9279-13f68e394e4f\") " pod="openstack/horizon-74ddcb8d87-g7ljk" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.683913 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f4cd\" (UniqueName: \"kubernetes.io/projected/1eb99423-f394-4f95-9279-13f68e394e4f-kube-api-access-9f4cd\") pod \"horizon-74ddcb8d87-g7ljk\" (UID: \"1eb99423-f394-4f95-9279-13f68e394e4f\") " pod="openstack/horizon-74ddcb8d87-g7ljk" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.683940 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1eb99423-f394-4f95-9279-13f68e394e4f-config-data\") pod \"horizon-74ddcb8d87-g7ljk\" (UID: \"1eb99423-f394-4f95-9279-13f68e394e4f\") " pod="openstack/horizon-74ddcb8d87-g7ljk" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.683963 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c4531da4-441c-4003-9f20-719853edb0b4-config\") pod \"neutron-db-sync-ltlmj\" (UID: \"c4531da4-441c-4003-9f20-719853edb0b4\") " pod="openstack/neutron-db-sync-ltlmj" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.683986 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4531da4-441c-4003-9f20-719853edb0b4-combined-ca-bundle\") pod \"neutron-db-sync-ltlmj\" (UID: \"c4531da4-441c-4003-9f20-719853edb0b4\") " pod="openstack/neutron-db-sync-ltlmj" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.684035 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-qw69v" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.684634 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1eb99423-f394-4f95-9279-13f68e394e4f-logs\") pod \"horizon-74ddcb8d87-g7ljk\" (UID: \"1eb99423-f394-4f95-9279-13f68e394e4f\") " pod="openstack/horizon-74ddcb8d87-g7ljk" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.685181 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1eb99423-f394-4f95-9279-13f68e394e4f-scripts\") pod \"horizon-74ddcb8d87-g7ljk\" (UID: \"1eb99423-f394-4f95-9279-13f68e394e4f\") " pod="openstack/horizon-74ddcb8d87-g7ljk" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.689481 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-s2wkp" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.689722 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.692456 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kxbq6" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.694619 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1eb99423-f394-4f95-9279-13f68e394e4f-config-data\") pod \"horizon-74ddcb8d87-g7ljk\" (UID: \"1eb99423-f394-4f95-9279-13f68e394e4f\") " pod="openstack/horizon-74ddcb8d87-g7ljk" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.700738 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1eb99423-f394-4f95-9279-13f68e394e4f-horizon-secret-key\") pod \"horizon-74ddcb8d87-g7ljk\" (UID: \"1eb99423-f394-4f95-9279-13f68e394e4f\") " pod="openstack/horizon-74ddcb8d87-g7ljk" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.720254 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-ltlmj"] Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.765885 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f4cd\" (UniqueName: \"kubernetes.io/projected/1eb99423-f394-4f95-9279-13f68e394e4f-kube-api-access-9f4cd\") pod \"horizon-74ddcb8d87-g7ljk\" (UID: \"1eb99423-f394-4f95-9279-13f68e394e4f\") " pod="openstack/horizon-74ddcb8d87-g7ljk" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.780059 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-n76m8"] Dec 01 09:51:16 crc kubenswrapper[4933]: E1201 09:51:16.780584 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c54760e-eb3b-4ad1-a6ee-3c494878c668" containerName="glance-db-sync" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.780602 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c54760e-eb3b-4ad1-a6ee-3c494878c668" containerName="glance-db-sync" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.780778 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c54760e-eb3b-4ad1-a6ee-3c494878c668" containerName="glance-db-sync" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.781434 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-n76m8" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.784627 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.785023 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.786001 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9964x" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.786253 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c54760e-eb3b-4ad1-a6ee-3c494878c668-combined-ca-bundle\") pod \"3c54760e-eb3b-4ad1-a6ee-3c494878c668\" (UID: \"3c54760e-eb3b-4ad1-a6ee-3c494878c668\") " Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.789502 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c54760e-eb3b-4ad1-a6ee-3c494878c668-config-data\") pod \"3c54760e-eb3b-4ad1-a6ee-3c494878c668\" (UID: \"3c54760e-eb3b-4ad1-a6ee-3c494878c668\") " Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.789567 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv8p2\" (UniqueName: \"kubernetes.io/projected/3c54760e-eb3b-4ad1-a6ee-3c494878c668-kube-api-access-xv8p2\") pod \"3c54760e-eb3b-4ad1-a6ee-3c494878c668\" (UID: \"3c54760e-eb3b-4ad1-a6ee-3c494878c668\") " Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.789612 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3c54760e-eb3b-4ad1-a6ee-3c494878c668-db-sync-config-data\") pod \"3c54760e-eb3b-4ad1-a6ee-3c494878c668\" (UID: \"3c54760e-eb3b-4ad1-a6ee-3c494878c668\") " Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.790017 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4531da4-441c-4003-9f20-719853edb0b4-combined-ca-bundle\") pod \"neutron-db-sync-ltlmj\" (UID: \"c4531da4-441c-4003-9f20-719853edb0b4\") " pod="openstack/neutron-db-sync-ltlmj" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.790057 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82f89d96-ceb5-4012-9273-68d00cb0780b-combined-ca-bundle\") pod \"barbican-db-sync-qw69v\" (UID: \"82f89d96-ceb5-4012-9273-68d00cb0780b\") " pod="openstack/barbican-db-sync-qw69v" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.790162 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82f89d96-ceb5-4012-9273-68d00cb0780b-db-sync-config-data\") pod \"barbican-db-sync-qw69v\" (UID: \"82f89d96-ceb5-4012-9273-68d00cb0780b\") " pod="openstack/barbican-db-sync-qw69v" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.790272 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss9td\" (UniqueName: \"kubernetes.io/projected/82f89d96-ceb5-4012-9273-68d00cb0780b-kube-api-access-ss9td\") pod \"barbican-db-sync-qw69v\" (UID: \"82f89d96-ceb5-4012-9273-68d00cb0780b\") " 
pod="openstack/barbican-db-sync-qw69v" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.790320 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np5hh\" (UniqueName: \"kubernetes.io/projected/c4531da4-441c-4003-9f20-719853edb0b4-kube-api-access-np5hh\") pod \"neutron-db-sync-ltlmj\" (UID: \"c4531da4-441c-4003-9f20-719853edb0b4\") " pod="openstack/neutron-db-sync-ltlmj" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.790501 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c4531da4-441c-4003-9f20-719853edb0b4-config\") pod \"neutron-db-sync-ltlmj\" (UID: \"c4531da4-441c-4003-9f20-719853edb0b4\") " pod="openstack/neutron-db-sync-ltlmj" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.802779 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c4531da4-441c-4003-9f20-719853edb0b4-config\") pod \"neutron-db-sync-ltlmj\" (UID: \"c4531da4-441c-4003-9f20-719853edb0b4\") " pod="openstack/neutron-db-sync-ltlmj" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.823450 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4531da4-441c-4003-9f20-719853edb0b4-combined-ca-bundle\") pod \"neutron-db-sync-ltlmj\" (UID: \"c4531da4-441c-4003-9f20-719853edb0b4\") " pod="openstack/neutron-db-sync-ltlmj" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.823630 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c54760e-eb3b-4ad1-a6ee-3c494878c668-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3c54760e-eb3b-4ad1-a6ee-3c494878c668" (UID: "3c54760e-eb3b-4ad1-a6ee-3c494878c668"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.824791 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-qw69v"] Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.880644 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c54760e-eb3b-4ad1-a6ee-3c494878c668-kube-api-access-xv8p2" (OuterVolumeSpecName: "kube-api-access-xv8p2") pod "3c54760e-eb3b-4ad1-a6ee-3c494878c668" (UID: "3c54760e-eb3b-4ad1-a6ee-3c494878c668"). InnerVolumeSpecName "kube-api-access-xv8p2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.880861 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-n76m8"] Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.884330 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np5hh\" (UniqueName: \"kubernetes.io/projected/c4531da4-441c-4003-9f20-719853edb0b4-kube-api-access-np5hh\") pod \"neutron-db-sync-ltlmj\" (UID: \"c4531da4-441c-4003-9f20-719853edb0b4\") " pod="openstack/neutron-db-sync-ltlmj" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.894691 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82f89d96-ceb5-4012-9273-68d00cb0780b-db-sync-config-data\") pod \"barbican-db-sync-qw69v\" (UID: \"82f89d96-ceb5-4012-9273-68d00cb0780b\") " pod="openstack/barbican-db-sync-qw69v" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.894768 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d3e5dc3-470a-4fa0-b17c-733457329c79-scripts\") pod \"cinder-db-sync-n76m8\" (UID: \"4d3e5dc3-470a-4fa0-b17c-733457329c79\") " pod="openstack/cinder-db-sync-n76m8" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.894792 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d3e5dc3-470a-4fa0-b17c-733457329c79-config-data\") pod \"cinder-db-sync-n76m8\" (UID: \"4d3e5dc3-470a-4fa0-b17c-733457329c79\") " pod="openstack/cinder-db-sync-n76m8" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.894811 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4d3e5dc3-470a-4fa0-b17c-733457329c79-db-sync-config-data\") pod \"cinder-db-sync-n76m8\" (UID: \"4d3e5dc3-470a-4fa0-b17c-733457329c79\") " pod="openstack/cinder-db-sync-n76m8" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.894834 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss9td\" (UniqueName: \"kubernetes.io/projected/82f89d96-ceb5-4012-9273-68d00cb0780b-kube-api-access-ss9td\") pod \"barbican-db-sync-qw69v\" (UID: \"82f89d96-ceb5-4012-9273-68d00cb0780b\") " pod="openstack/barbican-db-sync-qw69v" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.894892 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q29tl\" (UniqueName: \"kubernetes.io/projected/4d3e5dc3-470a-4fa0-b17c-733457329c79-kube-api-access-q29tl\") pod \"cinder-db-sync-n76m8\" (UID: \"4d3e5dc3-470a-4fa0-b17c-733457329c79\") " pod="openstack/cinder-db-sync-n76m8" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.894931 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d3e5dc3-470a-4fa0-b17c-733457329c79-etc-machine-id\") pod \"cinder-db-sync-n76m8\" (UID: \"4d3e5dc3-470a-4fa0-b17c-733457329c79\") " pod="openstack/cinder-db-sync-n76m8" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.894963 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/82f89d96-ceb5-4012-9273-68d00cb0780b-combined-ca-bundle\") pod \"barbican-db-sync-qw69v\" (UID: \"82f89d96-ceb5-4012-9273-68d00cb0780b\") " pod="openstack/barbican-db-sync-qw69v" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.894995 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d3e5dc3-470a-4fa0-b17c-733457329c79-combined-ca-bundle\") pod \"cinder-db-sync-n76m8\" (UID: \"4d3e5dc3-470a-4fa0-b17c-733457329c79\") " pod="openstack/cinder-db-sync-n76m8" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.895050 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv8p2\" (UniqueName: \"kubernetes.io/projected/3c54760e-eb3b-4ad1-a6ee-3c494878c668-kube-api-access-xv8p2\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.895063 4933 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3c54760e-eb3b-4ad1-a6ee-3c494878c668-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.908700 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kxbq6" event={"ID":"3c54760e-eb3b-4ad1-a6ee-3c494878c668","Type":"ContainerDied","Data":"acca3752b84db03daa2f90419ba0ed8c80ff80663af2cbc657af9a70566ba3b0"} Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.908979 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acca3752b84db03daa2f90419ba0ed8c80ff80663af2cbc657af9a70566ba3b0" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.908840 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kxbq6" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.908729 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82f89d96-ceb5-4012-9273-68d00cb0780b-db-sync-config-data\") pod \"barbican-db-sync-qw69v\" (UID: \"82f89d96-ceb5-4012-9273-68d00cb0780b\") " pod="openstack/barbican-db-sync-qw69v" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.917116 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82f89d96-ceb5-4012-9273-68d00cb0780b-combined-ca-bundle\") pod \"barbican-db-sync-qw69v\" (UID: \"82f89d96-ceb5-4012-9273-68d00cb0780b\") " pod="openstack/barbican-db-sync-qw69v" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.979608 4933 generic.go:334] "Generic (PLEG): container finished" podID="d22c4dcf-9fbf-430c-b820-268fa4a4eb2c" containerID="a24fba64383147fc7a1656e65e6b946f639999db286e843b3fe3b1a387c4359b" exitCode=0 Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.980146 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-x8znt" event={"ID":"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c","Type":"ContainerDied","Data":"a24fba64383147fc7a1656e65e6b946f639999db286e843b3fe3b1a387c4359b"} Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.997933 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q29tl\" (UniqueName: \"kubernetes.io/projected/4d3e5dc3-470a-4fa0-b17c-733457329c79-kube-api-access-q29tl\") pod \"cinder-db-sync-n76m8\" (UID: \"4d3e5dc3-470a-4fa0-b17c-733457329c79\") " pod="openstack/cinder-db-sync-n76m8" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.998048 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d3e5dc3-470a-4fa0-b17c-733457329c79-etc-machine-id\") pod \"cinder-db-sync-n76m8\" (UID: \"4d3e5dc3-470a-4fa0-b17c-733457329c79\") " pod="openstack/cinder-db-sync-n76m8" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.998130 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d3e5dc3-470a-4fa0-b17c-733457329c79-combined-ca-bundle\") pod \"cinder-db-sync-n76m8\" (UID: \"4d3e5dc3-470a-4fa0-b17c-733457329c79\") " pod="openstack/cinder-db-sync-n76m8" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.998265 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d3e5dc3-470a-4fa0-b17c-733457329c79-scripts\") pod \"cinder-db-sync-n76m8\" (UID: \"4d3e5dc3-470a-4fa0-b17c-733457329c79\") " pod="openstack/cinder-db-sync-n76m8" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.998296 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d3e5dc3-470a-4fa0-b17c-733457329c79-config-data\") pod \"cinder-db-sync-n76m8\" (UID: \"4d3e5dc3-470a-4fa0-b17c-733457329c79\") " pod="openstack/cinder-db-sync-n76m8" Dec 01 09:51:16 crc kubenswrapper[4933]: I1201 09:51:16.998440 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4d3e5dc3-470a-4fa0-b17c-733457329c79-db-sync-config-data\") pod \"cinder-db-sync-n76m8\" (UID: 
\"4d3e5dc3-470a-4fa0-b17c-733457329c79\") " pod="openstack/cinder-db-sync-n76m8" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.006849 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74ddcb8d87-g7ljk" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.008840 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d3e5dc3-470a-4fa0-b17c-733457329c79-etc-machine-id\") pod \"cinder-db-sync-n76m8\" (UID: \"4d3e5dc3-470a-4fa0-b17c-733457329c79\") " pod="openstack/cinder-db-sync-n76m8" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.012916 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d3e5dc3-470a-4fa0-b17c-733457329c79-scripts\") pod \"cinder-db-sync-n76m8\" (UID: \"4d3e5dc3-470a-4fa0-b17c-733457329c79\") " pod="openstack/cinder-db-sync-n76m8" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.021660 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss9td\" (UniqueName: \"kubernetes.io/projected/82f89d96-ceb5-4012-9273-68d00cb0780b-kube-api-access-ss9td\") pod \"barbican-db-sync-qw69v\" (UID: \"82f89d96-ceb5-4012-9273-68d00cb0780b\") " pod="openstack/barbican-db-sync-qw69v" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.021979 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4d3e5dc3-470a-4fa0-b17c-733457329c79-db-sync-config-data\") pod \"cinder-db-sync-n76m8\" (UID: \"4d3e5dc3-470a-4fa0-b17c-733457329c79\") " pod="openstack/cinder-db-sync-n76m8" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.024241 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d3e5dc3-470a-4fa0-b17c-733457329c79-combined-ca-bundle\") pod \"cinder-db-sync-n76m8\" (UID: \"4d3e5dc3-470a-4fa0-b17c-733457329c79\") " pod="openstack/cinder-db-sync-n76m8" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.024438 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c54760e-eb3b-4ad1-a6ee-3c494878c668-config-data" (OuterVolumeSpecName: "config-data") pod "3c54760e-eb3b-4ad1-a6ee-3c494878c668" (UID: "3c54760e-eb3b-4ad1-a6ee-3c494878c668"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.032570 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d3e5dc3-470a-4fa0-b17c-733457329c79-config-data\") pod \"cinder-db-sync-n76m8\" (UID: \"4d3e5dc3-470a-4fa0-b17c-733457329c79\") " pod="openstack/cinder-db-sync-n76m8" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.033626 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-ltlmj" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.052671 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q29tl\" (UniqueName: \"kubernetes.io/projected/4d3e5dc3-470a-4fa0-b17c-733457329c79-kube-api-access-q29tl\") pod \"cinder-db-sync-n76m8\" (UID: \"4d3e5dc3-470a-4fa0-b17c-733457329c79\") " pod="openstack/cinder-db-sync-n76m8" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.053360 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6fc68bdc87-867bw"] Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.055170 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6fc68bdc87-867bw" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.064747 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qw69v" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.065608 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c54760e-eb3b-4ad1-a6ee-3c494878c668-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c54760e-eb3b-4ad1-a6ee-3c494878c668" (UID: "3c54760e-eb3b-4ad1-a6ee-3c494878c668"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.084490 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6fc68bdc87-867bw"] Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.100932 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c54760e-eb3b-4ad1-a6ee-3c494878c668-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.100999 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c54760e-eb3b-4ad1-a6ee-3c494878c668-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.140468 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-8lwh6"] Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.142681 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8lwh6" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.151141 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9pw84" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.151474 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.151498 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.178996 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8lwh6"] Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.185951 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-n76m8" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.207866 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-config-data\") pod \"horizon-6fc68bdc87-867bw\" (UID: \"c09596af-78c2-4f79-8f0b-121ef7f9ef9a\") " pod="openstack/horizon-6fc68bdc87-867bw" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.207925 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-horizon-secret-key\") pod \"horizon-6fc68bdc87-867bw\" (UID: \"c09596af-78c2-4f79-8f0b-121ef7f9ef9a\") " pod="openstack/horizon-6fc68bdc87-867bw" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.207956 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-logs\") pod \"horizon-6fc68bdc87-867bw\" (UID: \"c09596af-78c2-4f79-8f0b-121ef7f9ef9a\") " pod="openstack/horizon-6fc68bdc87-867bw" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.208017 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-scripts\") pod \"horizon-6fc68bdc87-867bw\" (UID: \"c09596af-78c2-4f79-8f0b-121ef7f9ef9a\") " pod="openstack/horizon-6fc68bdc87-867bw" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.208051 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4c47\" (UniqueName: \"kubernetes.io/projected/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-kube-api-access-f4c47\") pod \"horizon-6fc68bdc87-867bw\" (UID: \"c09596af-78c2-4f79-8f0b-121ef7f9ef9a\") " pod="openstack/horizon-6fc68bdc87-867bw" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.269523 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-pd6bl"] Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.280404 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-gq26q"] Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.282652 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-gq26q" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.311438 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-config-data\") pod \"horizon-6fc68bdc87-867bw\" (UID: \"c09596af-78c2-4f79-8f0b-121ef7f9ef9a\") " pod="openstack/horizon-6fc68bdc87-867bw" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.311501 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-horizon-secret-key\") pod \"horizon-6fc68bdc87-867bw\" (UID: \"c09596af-78c2-4f79-8f0b-121ef7f9ef9a\") " pod="openstack/horizon-6fc68bdc87-867bw" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.311573 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-logs\") pod \"horizon-6fc68bdc87-867bw\" (UID: \"c09596af-78c2-4f79-8f0b-121ef7f9ef9a\") " pod="openstack/horizon-6fc68bdc87-867bw" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.311634 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7853c81a-e365-473f-a4a7-4fcc87f625cd-logs\") pod \"placement-db-sync-8lwh6\" (UID: \"7853c81a-e365-473f-a4a7-4fcc87f625cd\") " pod="openstack/placement-db-sync-8lwh6" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.311691 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-scripts\") pod \"horizon-6fc68bdc87-867bw\" (UID: \"c09596af-78c2-4f79-8f0b-121ef7f9ef9a\") " pod="openstack/horizon-6fc68bdc87-867bw" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.311715 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4c47\" (UniqueName: \"kubernetes.io/projected/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-kube-api-access-f4c47\") pod \"horizon-6fc68bdc87-867bw\" (UID: \"c09596af-78c2-4f79-8f0b-121ef7f9ef9a\") " pod="openstack/horizon-6fc68bdc87-867bw" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.311773 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7853c81a-e365-473f-a4a7-4fcc87f625cd-scripts\") pod \"placement-db-sync-8lwh6\" (UID: \"7853c81a-e365-473f-a4a7-4fcc87f625cd\") " pod="openstack/placement-db-sync-8lwh6" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.311795 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7853c81a-e365-473f-a4a7-4fcc87f625cd-config-data\") pod \"placement-db-sync-8lwh6\" (UID: \"7853c81a-e365-473f-a4a7-4fcc87f625cd\") " pod="openstack/placement-db-sync-8lwh6" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.311889 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7853c81a-e365-473f-a4a7-4fcc87f625cd-combined-ca-bundle\") pod \"placement-db-sync-8lwh6\" (UID: \"7853c81a-e365-473f-a4a7-4fcc87f625cd\") " pod="openstack/placement-db-sync-8lwh6" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 
09:51:17.311916 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqcdz\" (UniqueName: \"kubernetes.io/projected/7853c81a-e365-473f-a4a7-4fcc87f625cd-kube-api-access-bqcdz\") pod \"placement-db-sync-8lwh6\" (UID: \"7853c81a-e365-473f-a4a7-4fcc87f625cd\") " pod="openstack/placement-db-sync-8lwh6" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.313426 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-logs\") pod \"horizon-6fc68bdc87-867bw\" (UID: \"c09596af-78c2-4f79-8f0b-121ef7f9ef9a\") " pod="openstack/horizon-6fc68bdc87-867bw" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.313851 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-scripts\") pod \"horizon-6fc68bdc87-867bw\" (UID: \"c09596af-78c2-4f79-8f0b-121ef7f9ef9a\") " pod="openstack/horizon-6fc68bdc87-867bw" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.313961 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-config-data\") pod \"horizon-6fc68bdc87-867bw\" (UID: \"c09596af-78c2-4f79-8f0b-121ef7f9ef9a\") " pod="openstack/horizon-6fc68bdc87-867bw" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.313986 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-gq26q"] Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.323269 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-horizon-secret-key\") pod \"horizon-6fc68bdc87-867bw\" (UID: \"c09596af-78c2-4f79-8f0b-121ef7f9ef9a\") " pod="openstack/horizon-6fc68bdc87-867bw" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.367566 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4c47\" (UniqueName: \"kubernetes.io/projected/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-kube-api-access-f4c47\") pod \"horizon-6fc68bdc87-867bw\" (UID: \"c09596af-78c2-4f79-8f0b-121ef7f9ef9a\") " pod="openstack/horizon-6fc68bdc87-867bw" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.397257 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.399803 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.412150 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.413741 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7853c81a-e365-473f-a4a7-4fcc87f625cd-scripts\") pod \"placement-db-sync-8lwh6\" (UID: \"7853c81a-e365-473f-a4a7-4fcc87f625cd\") " pod="openstack/placement-db-sync-8lwh6" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.413774 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7853c81a-e365-473f-a4a7-4fcc87f625cd-config-data\") pod \"placement-db-sync-8lwh6\" (UID: \"7853c81a-e365-473f-a4a7-4fcc87f625cd\") " pod="openstack/placement-db-sync-8lwh6" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.413811 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-gq26q\" (UID: \"6e7cbead-369f-4781-979b-1751fea8561d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-gq26q" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.413870 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7853c81a-e365-473f-a4a7-4fcc87f625cd-combined-ca-bundle\") pod \"placement-db-sync-8lwh6\" (UID: \"7853c81a-e365-473f-a4a7-4fcc87f625cd\") " pod="openstack/placement-db-sync-8lwh6" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.413903 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqcdz\" (UniqueName: \"kubernetes.io/projected/7853c81a-e365-473f-a4a7-4fcc87f625cd-kube-api-access-bqcdz\") pod \"placement-db-sync-8lwh6\" (UID: \"7853c81a-e365-473f-a4a7-4fcc87f625cd\") " pod="openstack/placement-db-sync-8lwh6" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.413946 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-gq26q\" (UID: \"6e7cbead-369f-4781-979b-1751fea8561d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-gq26q" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.414001 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-gq26q\" (UID: \"6e7cbead-369f-4781-979b-1751fea8561d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-gq26q" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.414041 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld8cr\" (UniqueName: \"kubernetes.io/projected/6e7cbead-369f-4781-979b-1751fea8561d-kube-api-access-ld8cr\") pod \"dnsmasq-dns-58dd9ff6bc-gq26q\" (UID: \"6e7cbead-369f-4781-979b-1751fea8561d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-gq26q" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.414083 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-config\") pod \"dnsmasq-dns-58dd9ff6bc-gq26q\" (UID: \"6e7cbead-369f-4781-979b-1751fea8561d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-gq26q" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.414130 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7853c81a-e365-473f-a4a7-4fcc87f625cd-logs\") pod \"placement-db-sync-8lwh6\" (UID: \"7853c81a-e365-473f-a4a7-4fcc87f625cd\") " pod="openstack/placement-db-sync-8lwh6" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.414157 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-gq26q\" (UID: \"6e7cbead-369f-4781-979b-1751fea8561d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-gq26q" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.414995 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.420552 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7853c81a-e365-473f-a4a7-4fcc87f625cd-config-data\") pod \"placement-db-sync-8lwh6\" (UID: \"7853c81a-e365-473f-a4a7-4fcc87f625cd\") " pod="openstack/placement-db-sync-8lwh6" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.420983 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6fc68bdc87-867bw" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.423322 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7853c81a-e365-473f-a4a7-4fcc87f625cd-logs\") pod \"placement-db-sync-8lwh6\" (UID: \"7853c81a-e365-473f-a4a7-4fcc87f625cd\") " pod="openstack/placement-db-sync-8lwh6" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.426075 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7853c81a-e365-473f-a4a7-4fcc87f625cd-scripts\") pod \"placement-db-sync-8lwh6\" (UID: \"7853c81a-e365-473f-a4a7-4fcc87f625cd\") " pod="openstack/placement-db-sync-8lwh6" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.433275 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7853c81a-e365-473f-a4a7-4fcc87f625cd-combined-ca-bundle\") pod \"placement-db-sync-8lwh6\" (UID: \"7853c81a-e365-473f-a4a7-4fcc87f625cd\") " pod="openstack/placement-db-sync-8lwh6" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.520904 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-gq26q\" (UID: \"6e7cbead-369f-4781-979b-1751fea8561d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-gq26q" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.552505 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " pod="openstack/ceilometer-0" Dec 01 09:51:17 
crc kubenswrapper[4933]: I1201 09:51:17.552577 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-gq26q\" (UID: \"6e7cbead-369f-4781-979b-1751fea8561d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-gq26q" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.552602 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-scripts\") pod \"ceilometer-0\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " pod="openstack/ceilometer-0" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.552639 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld8cr\" (UniqueName: \"kubernetes.io/projected/6e7cbead-369f-4781-979b-1751fea8561d-kube-api-access-ld8cr\") pod \"dnsmasq-dns-58dd9ff6bc-gq26q\" (UID: \"6e7cbead-369f-4781-979b-1751fea8561d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-gq26q" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.552659 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-config-data\") pod \"ceilometer-0\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " pod="openstack/ceilometer-0" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.552727 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-config\") pod \"dnsmasq-dns-58dd9ff6bc-gq26q\" (UID: \"6e7cbead-369f-4781-979b-1751fea8561d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-gq26q" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.552854 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-gq26q\" (UID: \"6e7cbead-369f-4781-979b-1751fea8561d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-gq26q" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.553062 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-gq26q\" (UID: \"6e7cbead-369f-4781-979b-1751fea8561d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-gq26q" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.554099 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-gq26q\" (UID: \"6e7cbead-369f-4781-979b-1751fea8561d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-gq26q" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.526232 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqcdz\" (UniqueName: \"kubernetes.io/projected/7853c81a-e365-473f-a4a7-4fcc87f625cd-kube-api-access-bqcdz\") pod \"placement-db-sync-8lwh6\" (UID: \"7853c81a-e365-473f-a4a7-4fcc87f625cd\") " pod="openstack/placement-db-sync-8lwh6" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.531116 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-gq26q\" (UID: \"6e7cbead-369f-4781-979b-1751fea8561d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-gq26q" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.554749 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-gq26q\" (UID: \"6e7cbead-369f-4781-979b-1751fea8561d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-gq26q" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.555705 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-config\") pod \"dnsmasq-dns-58dd9ff6bc-gq26q\" (UID: \"6e7cbead-369f-4781-979b-1751fea8561d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-gq26q" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.556270 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-gq26q\" (UID: \"6e7cbead-369f-4781-979b-1751fea8561d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-gq26q" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.623017 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld8cr\" (UniqueName: \"kubernetes.io/projected/6e7cbead-369f-4781-979b-1751fea8561d-kube-api-access-ld8cr\") pod \"dnsmasq-dns-58dd9ff6bc-gq26q\" (UID: \"6e7cbead-369f-4781-979b-1751fea8561d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-gq26q" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.680647 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s6wm\" (UniqueName: \"kubernetes.io/projected/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-kube-api-access-7s6wm\") pod \"ceilometer-0\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " pod="openstack/ceilometer-0" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.680707 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " pod="openstack/ceilometer-0" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.680732 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-scripts\") pod \"ceilometer-0\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " pod="openstack/ceilometer-0" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.680759 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-config-data\") pod \"ceilometer-0\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " pod="openstack/ceilometer-0" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.680777 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-run-httpd\") pod \"ceilometer-0\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " 
pod="openstack/ceilometer-0" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.680793 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-log-httpd\") pod \"ceilometer-0\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " pod="openstack/ceilometer-0" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.680825 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " pod="openstack/ceilometer-0" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.689801 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-scripts\") pod \"ceilometer-0\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " pod="openstack/ceilometer-0" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.690233 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-config-data\") pod \"ceilometer-0\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " pod="openstack/ceilometer-0" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.706834 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " pod="openstack/ceilometer-0" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.771826 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8lwh6" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.775399 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-x8znt" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.782209 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.782652 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s6wm\" (UniqueName: \"kubernetes.io/projected/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-kube-api-access-7s6wm\") pod \"ceilometer-0\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " pod="openstack/ceilometer-0" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.782750 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-run-httpd\") pod \"ceilometer-0\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " pod="openstack/ceilometer-0" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.782771 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-log-httpd\") pod \"ceilometer-0\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " pod="openstack/ceilometer-0" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.782795 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " pod="openstack/ceilometer-0" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.784339 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-run-httpd\") pod \"ceilometer-0\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " pod="openstack/ceilometer-0" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.784997 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-log-httpd\") pod \"ceilometer-0\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " pod="openstack/ceilometer-0" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.818131 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s6wm\" (UniqueName: \"kubernetes.io/projected/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-kube-api-access-7s6wm\") pod \"ceilometer-0\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " pod="openstack/ceilometer-0" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.818208 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xv6k6"] Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.837517 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " pod="openstack/ceilometer-0" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.849222 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-gq26q"] Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.850033 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-gq26q" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.863146 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-ft7zj"] Dec 01 09:51:17 crc kubenswrapper[4933]: E1201 09:51:17.863564 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d22c4dcf-9fbf-430c-b820-268fa4a4eb2c" containerName="dnsmasq-dns" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.863578 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d22c4dcf-9fbf-430c-b820-268fa4a4eb2c" containerName="dnsmasq-dns" Dec 01 09:51:17 crc kubenswrapper[4933]: E1201 09:51:17.863602 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d22c4dcf-9fbf-430c-b820-268fa4a4eb2c" containerName="init" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.863607 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d22c4dcf-9fbf-430c-b820-268fa4a4eb2c" containerName="init" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.863788 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="d22c4dcf-9fbf-430c-b820-268fa4a4eb2c" containerName="dnsmasq-dns" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.865032 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.873971 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-ft7zj"] Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.888268 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-config\") pod \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\" (UID: \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\") " Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.888366 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-ovsdbserver-nb\") pod \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\" (UID: \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\") " Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.888510 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jts4s\" (UniqueName: \"kubernetes.io/projected/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-kube-api-access-jts4s\") pod \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\" (UID: \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\") " Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.888563 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-dns-svc\") pod \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\" (UID: \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\") " Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.888674 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-dns-swift-storage-0\") pod \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\" (UID: \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\") " Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.888743 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-ovsdbserver-sb\") pod \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\" (UID: \"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c\") " Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.921289 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-kube-api-access-jts4s" (OuterVolumeSpecName: "kube-api-access-jts4s") pod "d22c4dcf-9fbf-430c-b820-268fa4a4eb2c" (UID: "d22c4dcf-9fbf-430c-b820-268fa4a4eb2c"). InnerVolumeSpecName "kube-api-access-jts4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.931223 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-pd6bl"] Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.938545 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.977710 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d22c4dcf-9fbf-430c-b820-268fa4a4eb2c" (UID: "d22c4dcf-9fbf-430c-b820-268fa4a4eb2c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.987486 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d22c4dcf-9fbf-430c-b820-268fa4a4eb2c" (UID: "d22c4dcf-9fbf-430c-b820-268fa4a4eb2c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.991841 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfkzh\" (UniqueName: \"kubernetes.io/projected/ec93f2f8-8c54-443a-9560-4002b2b36a2c-kube-api-access-pfkzh\") pod \"dnsmasq-dns-785d8bcb8c-ft7zj\" (UID: \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.991936 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-ft7zj\" (UID: \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.992090 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-ft7zj\" (UID: \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.992198 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-config\") pod \"dnsmasq-dns-785d8bcb8c-ft7zj\" (UID: \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.992438 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-ft7zj\" (UID: \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.992497 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-ft7zj\" (UID: \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.992620 4933 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.992637 4933 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:17 crc kubenswrapper[4933]: I1201 09:51:17.992650 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jts4s\" (UniqueName: \"kubernetes.io/projected/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-kube-api-access-jts4s\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.003226 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-x8znt" 
event={"ID":"d22c4dcf-9fbf-430c-b820-268fa4a4eb2c","Type":"ContainerDied","Data":"7cbdd976db4aa10f08c59801dd44094900bf5dd0bc971408e2ee065652ec4072"} Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.003283 4933 scope.go:117] "RemoveContainer" containerID="a24fba64383147fc7a1656e65e6b946f639999db286e843b3fe3b1a387c4359b" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.003424 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-x8znt" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.004143 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d22c4dcf-9fbf-430c-b820-268fa4a4eb2c" (UID: "d22c4dcf-9fbf-430c-b820-268fa4a4eb2c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.008422 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xv6k6" event={"ID":"e925628e-0a4e-4893-b74d-7cd76160d44e","Type":"ContainerStarted","Data":"c8c24d122becaec36c26ba51b95805c4e9b71e56c64081cb8a92f853ebe53a2d"} Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.017514 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-config" (OuterVolumeSpecName: "config") pod "d22c4dcf-9fbf-430c-b820-268fa4a4eb2c" (UID: "d22c4dcf-9fbf-430c-b820-268fa4a4eb2c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.043925 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d22c4dcf-9fbf-430c-b820-268fa4a4eb2c" (UID: "d22c4dcf-9fbf-430c-b820-268fa4a4eb2c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.046199 4933 scope.go:117] "RemoveContainer" containerID="3d52f2bf451b59a7ad8ad9c43068cb7de10802b7ae9ea5549e0d215eb145c7f4" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.094008 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-ft7zj\" (UID: \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.094081 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-ft7zj\" (UID: \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.094121 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-config\") pod \"dnsmasq-dns-785d8bcb8c-ft7zj\" (UID: \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.094175 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-ft7zj\" (UID: \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.094204 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-ft7zj\" (UID: \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.094258 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfkzh\" (UniqueName: \"kubernetes.io/projected/ec93f2f8-8c54-443a-9560-4002b2b36a2c-kube-api-access-pfkzh\") pod \"dnsmasq-dns-785d8bcb8c-ft7zj\" (UID: \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.094328 4933 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.094346 4933 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.095353 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.096816 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-ft7zj\" (UID: \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.097518 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-ft7zj\" (UID: \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.101476 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-config\") pod \"dnsmasq-dns-785d8bcb8c-ft7zj\" (UID: \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.101685 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-ft7zj\" (UID: \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" Dec 01 09:51:18 crc kubenswrapper[4933]: W1201 09:51:18.107667 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd8b66b5_8ebe_4c29_8b02_a1889908b981.slice/crio-95e43689bfff35da1a34fcd7f26f457b98b07488f1c1eefa056eff82124d4caf WatchSource:0}: Error finding container 95e43689bfff35da1a34fcd7f26f457b98b07488f1c1eefa056eff82124d4caf: Status 404 returned error can't find the container with id 95e43689bfff35da1a34fcd7f26f457b98b07488f1c1eefa056eff82124d4caf Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.116676 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-ft7zj\" (UID: \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.142695 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfkzh\" (UniqueName: \"kubernetes.io/projected/ec93f2f8-8c54-443a-9560-4002b2b36a2c-kube-api-access-pfkzh\") pod \"dnsmasq-dns-785d8bcb8c-ft7zj\" (UID: \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.361864 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.382086 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-74ddcb8d87-g7ljk"] Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.393546 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.396667 4933 util.go:30] "No sandbox for pod can be found. 
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.405865 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.406457 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-shxmv"
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.409568 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Dec 01 09:51:18 crc kubenswrapper[4933]: W1201 09:51:18.424773 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1eb99423_f394_4f95_9279_13f68e394e4f.slice/crio-102a6c5d762746d777d0b08b286dc6c161a409c936f6074918fd7266b0ea517d WatchSource:0}: Error finding container 102a6c5d762746d777d0b08b286dc6c161a409c936f6074918fd7266b0ea517d: Status 404 returned error can't find the container with id 102a6c5d762746d777d0b08b286dc6c161a409c936f6074918fd7266b0ea517d
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.429675 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.471365 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-ltlmj"]
Dec 01 09:51:18 crc kubenswrapper[4933]: W1201 09:51:18.479575 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4531da4_441c_4003_9f20_719853edb0b4.slice/crio-7e7454e32a4438a5d84b7fbc0fb04c9e1198ee597a8fe3e6d498425796a02f1a WatchSource:0}: Error finding container 7e7454e32a4438a5d84b7fbc0fb04c9e1198ee597a8fe3e6d498425796a02f1a: Status 404 returned error can't find the container with id 7e7454e32a4438a5d84b7fbc0fb04c9e1198ee597a8fe3e6d498425796a02f1a
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.509845 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8004752c-1593-4d70-908a-aa63dd3a18b9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.510235 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8004752c-1593-4d70-908a-aa63dd3a18b9-scripts\") pod \"glance-default-external-api-0\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.510341 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg487\" (UniqueName: \"kubernetes.io/projected/8004752c-1593-4d70-908a-aa63dd3a18b9-kube-api-access-tg487\") pod \"glance-default-external-api-0\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.510540 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8004752c-1593-4d70-908a-aa63dd3a18b9-config-data\") pod \"glance-default-external-api-0\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.510624 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.510731 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8004752c-1593-4d70-908a-aa63dd3a18b9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.510858 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8004752c-1593-4d70-908a-aa63dd3a18b9-logs\") pod \"glance-default-external-api-0\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.615273 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8004752c-1593-4d70-908a-aa63dd3a18b9-scripts\") pod \"glance-default-external-api-0\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.615413 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg487\" (UniqueName: \"kubernetes.io/projected/8004752c-1593-4d70-908a-aa63dd3a18b9-kube-api-access-tg487\") pod \"glance-default-external-api-0\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.615484 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8004752c-1593-4d70-908a-aa63dd3a18b9-config-data\") pod \"glance-default-external-api-0\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.615511 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.615546 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8004752c-1593-4d70-908a-aa63dd3a18b9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.616486 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8004752c-1593-4d70-908a-aa63dd3a18b9-logs\") pod \"glance-default-external-api-0\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.616575 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8004752c-1593-4d70-908a-aa63dd3a18b9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.618264 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8004752c-1593-4d70-908a-aa63dd3a18b9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.619908 4933 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0"
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.624139 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8004752c-1593-4d70-908a-aa63dd3a18b9-logs\") pod \"glance-default-external-api-0\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.630162 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8004752c-1593-4d70-908a-aa63dd3a18b9-config-data\") pod \"glance-default-external-api-0\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.643761 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-x8znt"]
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.648608 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8004752c-1593-4d70-908a-aa63dd3a18b9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.655007 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8004752c-1593-4d70-908a-aa63dd3a18b9-scripts\") pod \"glance-default-external-api-0\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.670439 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-x8znt"]
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.703530 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6fc68bdc87-867bw"]
Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.711275 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg487\" (UniqueName: \"kubernetes.io/projected/8004752c-1593-4d70-908a-aa63dd3a18b9-kube-api-access-tg487\") pod \"glance-default-external-api-0\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " pod="openstack/glance-default-external-api-0"
pod="openstack/glance-default-external-api-0" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.755817 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-n76m8"] Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.793039 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.802419 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.803570 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " pod="openstack/glance-default-external-api-0" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.821922 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.824455 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-qw69v"] Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.848210 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.868894 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-gq26q"] Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.920396 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.941650 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35322d28-daef-4619-bd63-2baaa9ae46bd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.941789 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35322d28-daef-4619-bd63-2baaa9ae46bd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.941897 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbwh7\" (UniqueName: \"kubernetes.io/projected/35322d28-daef-4619-bd63-2baaa9ae46bd-kube-api-access-hbwh7\") pod \"glance-default-internal-api-0\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.941962 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.942004 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/35322d28-daef-4619-bd63-2baaa9ae46bd-logs\") pod \"glance-default-internal-api-0\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.942061 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35322d28-daef-4619-bd63-2baaa9ae46bd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:18 crc kubenswrapper[4933]: I1201 09:51:18.942095 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35322d28-daef-4619-bd63-2baaa9ae46bd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.044354 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35322d28-daef-4619-bd63-2baaa9ae46bd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.044395 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35322d28-daef-4619-bd63-2baaa9ae46bd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.044419 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35322d28-daef-4619-bd63-2baaa9ae46bd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.044464 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35322d28-daef-4619-bd63-2baaa9ae46bd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.044538 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbwh7\" (UniqueName: \"kubernetes.io/projected/35322d28-daef-4619-bd63-2baaa9ae46bd-kube-api-access-hbwh7\") pod \"glance-default-internal-api-0\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.044580 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.044618 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/35322d28-daef-4619-bd63-2baaa9ae46bd-logs\") pod \"glance-default-internal-api-0\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.045349 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35322d28-daef-4619-bd63-2baaa9ae46bd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.046466 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35322d28-daef-4619-bd63-2baaa9ae46bd-logs\") pod \"glance-default-internal-api-0\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.046792 4933 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.062082 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8lwh6"] Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.062753 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35322d28-daef-4619-bd63-2baaa9ae46bd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.067944 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35322d28-daef-4619-bd63-2baaa9ae46bd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.069069 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35322d28-daef-4619-bd63-2baaa9ae46bd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.069299 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbwh7\" (UniqueName: \"kubernetes.io/projected/35322d28-daef-4619-bd63-2baaa9ae46bd-kube-api-access-hbwh7\") pod \"glance-default-internal-api-0\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.093135 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.096935 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6fc68bdc87-867bw" event={"ID":"c09596af-78c2-4f79-8f0b-121ef7f9ef9a","Type":"ContainerStarted","Data":"b71ba3e5d3edca4e9f822adb2f477cd21a48ac18e581e96211b5b01080450bbc"} Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.103371 
4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ltlmj" event={"ID":"c4531da4-441c-4003-9f20-719853edb0b4","Type":"ContainerStarted","Data":"0649f1c6942f179c383337224226162a54f82fdbc301ffc50fafa75a579626ee"} Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.103428 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ltlmj" event={"ID":"c4531da4-441c-4003-9f20-719853edb0b4","Type":"ContainerStarted","Data":"7e7454e32a4438a5d84b7fbc0fb04c9e1198ee597a8fe3e6d498425796a02f1a"} Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.106288 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.117472 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74ddcb8d87-g7ljk" event={"ID":"1eb99423-f394-4f95-9279-13f68e394e4f","Type":"ContainerStarted","Data":"102a6c5d762746d777d0b08b286dc6c161a409c936f6074918fd7266b0ea517d"} Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.126806 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qw69v" event={"ID":"82f89d96-ceb5-4012-9273-68d00cb0780b","Type":"ContainerStarted","Data":"fabc982509c61e3344eef97e45289e067122877c103cfdd7da8a511c97fdd3b7"} Dec 01 09:51:19 crc kubenswrapper[4933]: W1201 09:51:19.126821 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45a1fb8d_e0ee_4470_aff1_b61e1b3dfdea.slice/crio-784e1df92a8f0191e3ee36e44709e30a2432852f1e7580e576cbfe7cb94d3d37 WatchSource:0}: Error finding container 784e1df92a8f0191e3ee36e44709e30a2432852f1e7580e576cbfe7cb94d3d37: Status 404 returned error can't find the container with id 784e1df92a8f0191e3ee36e44709e30a2432852f1e7580e576cbfe7cb94d3d37 Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.130367 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-gq26q" event={"ID":"6e7cbead-369f-4781-979b-1751fea8561d","Type":"ContainerStarted","Data":"f8bfe6f6950c3286963b376399651522194803570c6d5cffe114b558e221666c"} Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.141654 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n76m8" event={"ID":"4d3e5dc3-470a-4fa0-b17c-733457329c79","Type":"ContainerStarted","Data":"701f41f037a39f412ce2742001aaa5c32cf9ab2bb1782f076f09928484ef7efc"} Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.142233 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-ltlmj" podStartSLOduration=3.142207108 podStartE2EDuration="3.142207108s" podCreationTimestamp="2025-12-01 09:51:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:51:19.131532806 +0000 UTC m=+1169.773256431" watchObservedRunningTime="2025-12-01 09:51:19.142207108 +0000 UTC m=+1169.783930723" Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.150154 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.151609 4933 generic.go:334] "Generic (PLEG): container finished" podID="cd8b66b5-8ebe-4c29-8b02-a1889908b981" containerID="ab3fe041ae890ac68f5b168984fbcf1f09d022b83a2645f167033b4a3e66b875" exitCode=0 Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.151850 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-pd6bl" event={"ID":"cd8b66b5-8ebe-4c29-8b02-a1889908b981","Type":"ContainerDied","Data":"ab3fe041ae890ac68f5b168984fbcf1f09d022b83a2645f167033b4a3e66b875"} Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.151928 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-pd6bl" event={"ID":"cd8b66b5-8ebe-4c29-8b02-a1889908b981","Type":"ContainerStarted","Data":"95e43689bfff35da1a34fcd7f26f457b98b07488f1c1eefa056eff82124d4caf"} Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.169891 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xv6k6" event={"ID":"e925628e-0a4e-4893-b74d-7cd76160d44e","Type":"ContainerStarted","Data":"ec39fafe40cb1124fe562c39af12965099e3767a85c6ad02a63597c859ff1df6"} Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.199107 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xv6k6" podStartSLOduration=3.199070961 podStartE2EDuration="3.199070961s" podCreationTimestamp="2025-12-01 09:51:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:51:19.191421783 +0000 UTC m=+1169.833145398" watchObservedRunningTime="2025-12-01 09:51:19.199070961 +0000 UTC m=+1169.840794576" Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.281955 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-ft7zj"] Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.716217 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d22c4dcf-9fbf-430c-b820-268fa4a4eb2c" path="/var/lib/kubelet/pods/d22c4dcf-9fbf-430c-b820-268fa4a4eb2c/volumes" Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.764234 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.806843 4933 util.go:48] "No ready sandbox for pod can be found. 
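The pod_startup_latency_tracker lines encode simple arithmetic: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp, e.g. 09:51:19.142207108 minus 09:51:16 = 3.142207108s for neutron-db-sync-ltlmj above. A quick check in Go; the layout string is an assumption matching the printed timestamp format:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout mirrors the timestamps printed by pod_startup_latency_tracker;
	// Go's time.Parse accepts the fractional seconds on input even though
	// the layout omits them.
	const layout = "2006-01-02 15:04:05 -0700 MST"

	created, _ := time.Parse(layout, "2025-12-01 09:51:16 +0000 UTC")
	running, _ := time.Parse(layout, "2025-12-01 09:51:19.142207108 +0000 UTC")

	// Matches podStartSLOduration=3.142207108 logged for neutron-db-sync-ltlmj.
	fmt.Println(running.Sub(created)) // 3.142207108s
}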
Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.831205 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6fc68bdc87-867bw"]
Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.862846 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.880800 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 01 09:51:19 crc kubenswrapper[4933]: W1201 09:51:19.897917 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8004752c_1593_4d70_908a_aa63dd3a18b9.slice/crio-8c503155a4349f3748b739242efd59f4e5d7ba39c4c9284f0ce852c39fd3e1b9 WatchSource:0}: Error finding container 8c503155a4349f3748b739242efd59f4e5d7ba39c4c9284f0ce852c39fd3e1b9: Status 404 returned error can't find the container with id 8c503155a4349f3748b739242efd59f4e5d7ba39c4c9284f0ce852c39fd3e1b9
Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.950996 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-768dbf8f5c-4lmql"]
Dec 01 09:51:19 crc kubenswrapper[4933]: E1201 09:51:19.951865 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8b66b5-8ebe-4c29-8b02-a1889908b981" containerName="init"
Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.951954 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8b66b5-8ebe-4c29-8b02-a1889908b981" containerName="init"
Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.952253 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd8b66b5-8ebe-4c29-8b02-a1889908b981" containerName="init"
Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.953821 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-768dbf8f5c-4lmql"
Dec 01 09:51:19 crc kubenswrapper[4933]: I1201 09:51:19.991980 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.005879 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-ovsdbserver-sb\") pod \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\" (UID: \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\") "
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.006000 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-config\") pod \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\" (UID: \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\") "
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.006048 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-dns-swift-storage-0\") pod \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\" (UID: \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\") "
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.006129 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-ovsdbserver-nb\") pod \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\" (UID: \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\") "
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.006363 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrnsg\" (UniqueName: \"kubernetes.io/projected/cd8b66b5-8ebe-4c29-8b02-a1889908b981-kube-api-access-xrnsg\") pod \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\" (UID: \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\") "
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.006511 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-dns-svc\") pod \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\" (UID: \"cd8b66b5-8ebe-4c29-8b02-a1889908b981\") "
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.011553 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-horizon-secret-key\") pod \"horizon-768dbf8f5c-4lmql\" (UID: \"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5\") " pod="openstack/horizon-768dbf8f5c-4lmql"
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.011716 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl9gf\" (UniqueName: \"kubernetes.io/projected/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-kube-api-access-gl9gf\") pod \"horizon-768dbf8f5c-4lmql\" (UID: \"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5\") " pod="openstack/horizon-768dbf8f5c-4lmql"
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.011946 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-logs\") pod \"horizon-768dbf8f5c-4lmql\" (UID: \"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5\") " pod="openstack/horizon-768dbf8f5c-4lmql"
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.012190 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-config-data\") pod \"horizon-768dbf8f5c-4lmql\" (UID: \"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5\") " pod="openstack/horizon-768dbf8f5c-4lmql"
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.012225 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-scripts\") pod \"horizon-768dbf8f5c-4lmql\" (UID: \"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5\") " pod="openstack/horizon-768dbf8f5c-4lmql"
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.040859 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd8b66b5-8ebe-4c29-8b02-a1889908b981-kube-api-access-xrnsg" (OuterVolumeSpecName: "kube-api-access-xrnsg") pod "cd8b66b5-8ebe-4c29-8b02-a1889908b981" (UID: "cd8b66b5-8ebe-4c29-8b02-a1889908b981"). InnerVolumeSpecName "kube-api-access-xrnsg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.109816 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cd8b66b5-8ebe-4c29-8b02-a1889908b981" (UID: "cd8b66b5-8ebe-4c29-8b02-a1889908b981"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.110026 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cd8b66b5-8ebe-4c29-8b02-a1889908b981" (UID: "cd8b66b5-8ebe-4c29-8b02-a1889908b981"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.116711 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-horizon-secret-key\") pod \"horizon-768dbf8f5c-4lmql\" (UID: \"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5\") " pod="openstack/horizon-768dbf8f5c-4lmql"
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.117519 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl9gf\" (UniqueName: \"kubernetes.io/projected/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-kube-api-access-gl9gf\") pod \"horizon-768dbf8f5c-4lmql\" (UID: \"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5\") " pod="openstack/horizon-768dbf8f5c-4lmql"
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.118298 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-logs\") pod \"horizon-768dbf8f5c-4lmql\" (UID: \"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5\") " pod="openstack/horizon-768dbf8f5c-4lmql"
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.118908 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-config-data\") pod \"horizon-768dbf8f5c-4lmql\" (UID: \"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5\") " pod="openstack/horizon-768dbf8f5c-4lmql"
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.118978 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-scripts\") pod \"horizon-768dbf8f5c-4lmql\" (UID: \"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5\") " pod="openstack/horizon-768dbf8f5c-4lmql"
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.119586 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrnsg\" (UniqueName: \"kubernetes.io/projected/cd8b66b5-8ebe-4c29-8b02-a1889908b981-kube-api-access-xrnsg\") on node \"crc\" DevicePath \"\""
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.119610 4933 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.119624 4933 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.122542 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-logs\") pod \"horizon-768dbf8f5c-4lmql\" (UID: \"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5\") " pod="openstack/horizon-768dbf8f5c-4lmql"
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.123701 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-config-data\") pod \"horizon-768dbf8f5c-4lmql\" (UID: \"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5\") " pod="openstack/horizon-768dbf8f5c-4lmql"
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.123907 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cd8b66b5-8ebe-4c29-8b02-a1889908b981" (UID: "cd8b66b5-8ebe-4c29-8b02-a1889908b981"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.132086 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-horizon-secret-key\") pod \"horizon-768dbf8f5c-4lmql\" (UID: \"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5\") " pod="openstack/horizon-768dbf8f5c-4lmql"
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.132655 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cd8b66b5-8ebe-4c29-8b02-a1889908b981" (UID: "cd8b66b5-8ebe-4c29-8b02-a1889908b981"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.132731 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-768dbf8f5c-4lmql"]
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.135134 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-scripts\") pod \"horizon-768dbf8f5c-4lmql\" (UID: \"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5\") " pod="openstack/horizon-768dbf8f5c-4lmql"
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.135678 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-config" (OuterVolumeSpecName: "config") pod "cd8b66b5-8ebe-4c29-8b02-a1889908b981" (UID: "cd8b66b5-8ebe-4c29-8b02-a1889908b981"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.168296 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl9gf\" (UniqueName: \"kubernetes.io/projected/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-kube-api-access-gl9gf\") pod \"horizon-768dbf8f5c-4lmql\" (UID: \"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5\") " pod="openstack/horizon-768dbf8f5c-4lmql"
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.216891 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea","Type":"ContainerStarted","Data":"784e1df92a8f0191e3ee36e44709e30a2432852f1e7580e576cbfe7cb94d3d37"}
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.225226 4933 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.225371 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-config\") on node \"crc\" DevicePath \"\""
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.225432 4933 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd8b66b5-8ebe-4c29-8b02-a1889908b981-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.230773 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.244779 4933 generic.go:334] "Generic (PLEG): container finished" podID="6e7cbead-369f-4781-979b-1751fea8561d" containerID="0d5d3d43043d01c4f107d69c3f4cafd1cef28c2670a0ee32256e8fa2818d66a2" exitCode=0
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.245080 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-gq26q" event={"ID":"6e7cbead-369f-4781-979b-1751fea8561d","Type":"ContainerDied","Data":"0d5d3d43043d01c4f107d69c3f4cafd1cef28c2670a0ee32256e8fa2818d66a2"}
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.254599 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35322d28-daef-4619-bd63-2baaa9ae46bd","Type":"ContainerStarted","Data":"b4a01308dab79f4e6f9186db2fa32f3b8e180372f1c8ea6dffdaa2b9782ee53c"}
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.268493 4933 generic.go:334] "Generic (PLEG): container finished" podID="ec93f2f8-8c54-443a-9560-4002b2b36a2c" containerID="fbdcf3e4fa43eafc587ff9421099cefc4464f0f681cdcae2cfdcb76a2816a5b9" exitCode=0
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.268603 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" event={"ID":"ec93f2f8-8c54-443a-9560-4002b2b36a2c","Type":"ContainerDied","Data":"fbdcf3e4fa43eafc587ff9421099cefc4464f0f681cdcae2cfdcb76a2816a5b9"}
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.268635 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" event={"ID":"ec93f2f8-8c54-443a-9560-4002b2b36a2c","Type":"ContainerStarted","Data":"c2281e0f10956439f5009e178800bfdfa7bb4eafa852b4c7d7364275d5c6d37d"}
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.304957 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-pd6bl"
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.305951 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-pd6bl" event={"ID":"cd8b66b5-8ebe-4c29-8b02-a1889908b981","Type":"ContainerDied","Data":"95e43689bfff35da1a34fcd7f26f457b98b07488f1c1eefa056eff82124d4caf"}
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.306025 4933 scope.go:117] "RemoveContainer" containerID="ab3fe041ae890ac68f5b168984fbcf1f09d022b83a2645f167033b4a3e66b875"
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.325564 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8lwh6" event={"ID":"7853c81a-e365-473f-a4a7-4fcc87f625cd","Type":"ContainerStarted","Data":"dd8b57b327d99b7f517c48c820c9b4b005ed86fde35b7948a77fb030e6fb9ae5"}
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.330453 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8004752c-1593-4d70-908a-aa63dd3a18b9","Type":"ContainerStarted","Data":"8c503155a4349f3748b739242efd59f4e5d7ba39c4c9284f0ce852c39fd3e1b9"}
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.434824 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-768dbf8f5c-4lmql"
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.602722 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-pd6bl"]
Dec 01 09:51:20 crc kubenswrapper[4933]: I1201 09:51:20.613508 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-pd6bl"]
Dec 01 09:51:21 crc kubenswrapper[4933]: I1201 09:51:20.937260 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-gq26q"
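Each kubenswrapper record carries a klog header after the journald prefix: a severity letter (I/W/E), an MMDD date, the wall-clock time, a process id, and the emitting source file:line. The two clocks can disagree across a second boundary, as in the last entry above (journald stamps it 09:51:21 while klog says 09:51:20.937260). A sketch of pulling that header apart with a regular expression built from the format seen here; the pattern is an assumption derived from these lines, not klog's specification:

package main

import (
	"fmt"
	"regexp"
)

// klogRe captures severity, MMDD, time, pid, file, line, and message from
// the header format visible in this log.
var klogRe = regexp.MustCompile(
	`([IWE])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.+)`)

func main() {
	// A RemoveContainer entry copied from the log above.
	line := `I1201 09:51:20.306025 4933 scope.go:117] "RemoveContainer" containerID="ab3fe041ae890ac68f5b168984fbcf1f09d022b83a2645f167033b4a3e66b875"`
	m := klogRe.FindStringSubmatch(line)
	if m == nil {
		panic("no match")
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s:%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
	fmt.Println("msg:", m[7])
}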
Dec 01 09:51:21 crc kubenswrapper[4933]: I1201 09:51:20.981567 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-config\") pod \"6e7cbead-369f-4781-979b-1751fea8561d\" (UID: \"6e7cbead-369f-4781-979b-1751fea8561d\") "
Dec 01 09:51:21 crc kubenswrapper[4933]: I1201 09:51:20.981625 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld8cr\" (UniqueName: \"kubernetes.io/projected/6e7cbead-369f-4781-979b-1751fea8561d-kube-api-access-ld8cr\") pod \"6e7cbead-369f-4781-979b-1751fea8561d\" (UID: \"6e7cbead-369f-4781-979b-1751fea8561d\") "
Dec 01 09:51:21 crc kubenswrapper[4933]: I1201 09:51:20.981721 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-dns-svc\") pod \"6e7cbead-369f-4781-979b-1751fea8561d\" (UID: \"6e7cbead-369f-4781-979b-1751fea8561d\") "
Dec 01 09:51:21 crc kubenswrapper[4933]: I1201 09:51:20.981848 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-dns-swift-storage-0\") pod \"6e7cbead-369f-4781-979b-1751fea8561d\" (UID: \"6e7cbead-369f-4781-979b-1751fea8561d\") "
Dec 01 09:51:21 crc kubenswrapper[4933]: I1201 09:51:20.981890 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-ovsdbserver-nb\") pod \"6e7cbead-369f-4781-979b-1751fea8561d\" (UID: \"6e7cbead-369f-4781-979b-1751fea8561d\") "
Dec 01 09:51:21 crc kubenswrapper[4933]: I1201 09:51:20.981959 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-ovsdbserver-sb\") pod \"6e7cbead-369f-4781-979b-1751fea8561d\" (UID: \"6e7cbead-369f-4781-979b-1751fea8561d\") "
Dec 01 09:51:21 crc kubenswrapper[4933]: I1201 09:51:20.994510 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e7cbead-369f-4781-979b-1751fea8561d-kube-api-access-ld8cr" (OuterVolumeSpecName: "kube-api-access-ld8cr") pod "6e7cbead-369f-4781-979b-1751fea8561d" (UID: "6e7cbead-369f-4781-979b-1751fea8561d"). InnerVolumeSpecName "kube-api-access-ld8cr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:51:21 crc kubenswrapper[4933]: I1201 09:51:21.021639 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-config" (OuterVolumeSpecName: "config") pod "6e7cbead-369f-4781-979b-1751fea8561d" (UID: "6e7cbead-369f-4781-979b-1751fea8561d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:51:21 crc kubenswrapper[4933]: I1201 09:51:21.027994 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6e7cbead-369f-4781-979b-1751fea8561d" (UID: "6e7cbead-369f-4781-979b-1751fea8561d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:51:21 crc kubenswrapper[4933]: I1201 09:51:21.060738 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6e7cbead-369f-4781-979b-1751fea8561d" (UID: "6e7cbead-369f-4781-979b-1751fea8561d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:51:21 crc kubenswrapper[4933]: I1201 09:51:21.087053 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld8cr\" (UniqueName: \"kubernetes.io/projected/6e7cbead-369f-4781-979b-1751fea8561d-kube-api-access-ld8cr\") on node \"crc\" DevicePath \"\""
Dec 01 09:51:21 crc kubenswrapper[4933]: I1201 09:51:21.087088 4933 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 01 09:51:21 crc kubenswrapper[4933]: I1201 09:51:21.087100 4933 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 01 09:51:21 crc kubenswrapper[4933]: I1201 09:51:21.087111 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-config\") on node \"crc\" DevicePath \"\""
Dec 01 09:51:21 crc kubenswrapper[4933]: I1201 09:51:21.088156 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6e7cbead-369f-4781-979b-1751fea8561d" (UID: "6e7cbead-369f-4781-979b-1751fea8561d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:51:21 crc kubenswrapper[4933]: I1201 09:51:21.158347 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6e7cbead-369f-4781-979b-1751fea8561d" (UID: "6e7cbead-369f-4781-979b-1751fea8561d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:51:21 crc kubenswrapper[4933]: I1201 09:51:21.189602 4933 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 01 09:51:21 crc kubenswrapper[4933]: I1201 09:51:21.189665 4933 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e7cbead-369f-4781-979b-1751fea8561d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 01 09:51:21 crc kubenswrapper[4933]: I1201 09:51:21.352932 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-gq26q" event={"ID":"6e7cbead-369f-4781-979b-1751fea8561d","Type":"ContainerDied","Data":"f8bfe6f6950c3286963b376399651522194803570c6d5cffe114b558e221666c"}
Dec 01 09:51:21 crc kubenswrapper[4933]: I1201 09:51:21.353011 4933 scope.go:117] "RemoveContainer" containerID="0d5d3d43043d01c4f107d69c3f4cafd1cef28c2670a0ee32256e8fa2818d66a2"
Dec 01 09:51:21 crc kubenswrapper[4933]: I1201 09:51:21.353004 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-gq26q"
Dec 01 09:51:21 crc kubenswrapper[4933]: I1201 09:51:21.468524 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-gq26q"]
Dec 01 09:51:21 crc kubenswrapper[4933]: I1201 09:51:21.493757 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-gq26q"]
Dec 01 09:51:21 crc kubenswrapper[4933]: I1201 09:51:21.773959 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e7cbead-369f-4781-979b-1751fea8561d" path="/var/lib/kubelet/pods/6e7cbead-369f-4781-979b-1751fea8561d/volumes"
Dec 01 09:51:21 crc kubenswrapper[4933]: I1201 09:51:21.784099 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd8b66b5-8ebe-4c29-8b02-a1889908b981" path="/var/lib/kubelet/pods/cd8b66b5-8ebe-4c29-8b02-a1889908b981/volumes"
Dec 01 09:51:22 crc kubenswrapper[4933]: I1201 09:51:22.352955 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-768dbf8f5c-4lmql"]
Dec 01 09:51:22 crc kubenswrapper[4933]: W1201 09:51:22.373497 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c3bff5c_fb60_4d5d_a555_174c4d7ce1a5.slice/crio-eb669341880b4b91feacea450e6e7493a5c1a8adc05bda1e32ad99bbff200dc0 WatchSource:0}: Error finding container eb669341880b4b91feacea450e6e7493a5c1a8adc05bda1e32ad99bbff200dc0: Status 404 returned error can't find the container with id eb669341880b4b91feacea450e6e7493a5c1a8adc05bda1e32ad99bbff200dc0
Dec 01 09:51:22 crc kubenswrapper[4933]: I1201 09:51:22.434815 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" event={"ID":"ec93f2f8-8c54-443a-9560-4002b2b36a2c","Type":"ContainerStarted","Data":"ee793ae4ee03be502f4c3729336120f518234ff27751b14bac5fbe36fe1476c7"}
Dec 01 09:51:22 crc kubenswrapper[4933]: I1201 09:51:22.436437 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj"
Dec 01 09:51:22 crc kubenswrapper[4933]: I1201 09:51:22.443135 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8004752c-1593-4d70-908a-aa63dd3a18b9","Type":"ContainerStarted","Data":"c1d6c29ce097233dfdc33730604818878fc30c1b3e473150d02441c5f193f110"}
event={"ID":"8004752c-1593-4d70-908a-aa63dd3a18b9","Type":"ContainerStarted","Data":"c1d6c29ce097233dfdc33730604818878fc30c1b3e473150d02441c5f193f110"} Dec 01 09:51:22 crc kubenswrapper[4933]: I1201 09:51:22.461809 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" podStartSLOduration=5.46178101 podStartE2EDuration="5.46178101s" podCreationTimestamp="2025-12-01 09:51:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:51:22.451837697 +0000 UTC m=+1173.093561312" watchObservedRunningTime="2025-12-01 09:51:22.46178101 +0000 UTC m=+1173.103504625" Dec 01 09:51:22 crc kubenswrapper[4933]: I1201 09:51:22.466984 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35322d28-daef-4619-bd63-2baaa9ae46bd","Type":"ContainerStarted","Data":"f190ad7a0509497102623ab03b5b417ccd617a5d156ac1c4364a03c58e71f35b"} Dec 01 09:51:23 crc kubenswrapper[4933]: I1201 09:51:23.496219 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-768dbf8f5c-4lmql" event={"ID":"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5","Type":"ContainerStarted","Data":"eb669341880b4b91feacea450e6e7493a5c1a8adc05bda1e32ad99bbff200dc0"} Dec 01 09:51:23 crc kubenswrapper[4933]: I1201 09:51:23.507885 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8004752c-1593-4d70-908a-aa63dd3a18b9" containerName="glance-log" containerID="cri-o://c1d6c29ce097233dfdc33730604818878fc30c1b3e473150d02441c5f193f110" gracePeriod=30 Dec 01 09:51:23 crc kubenswrapper[4933]: I1201 09:51:23.508016 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8004752c-1593-4d70-908a-aa63dd3a18b9" containerName="glance-httpd" containerID="cri-o://323b704850b279cbaa4f5903fc3c8edf9c05b37bb41d2ab6cb5877ace242d05a" gracePeriod=30 Dec 01 09:51:23 crc kubenswrapper[4933]: I1201 09:51:23.507852 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8004752c-1593-4d70-908a-aa63dd3a18b9","Type":"ContainerStarted","Data":"323b704850b279cbaa4f5903fc3c8edf9c05b37bb41d2ab6cb5877ace242d05a"} Dec 01 09:51:23 crc kubenswrapper[4933]: I1201 09:51:23.547633 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.5476085600000005 podStartE2EDuration="6.54760856s" podCreationTimestamp="2025-12-01 09:51:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:51:23.534378227 +0000 UTC m=+1174.176101852" watchObservedRunningTime="2025-12-01 09:51:23.54760856 +0000 UTC m=+1174.189332175" Dec 01 09:51:24 crc kubenswrapper[4933]: I1201 09:51:24.535814 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35322d28-daef-4619-bd63-2baaa9ae46bd","Type":"ContainerStarted","Data":"94d2bc51c4734946e558f8ee2fba0364622c9c0d818fb15230a3163bcd8690da"} Dec 01 09:51:24 crc kubenswrapper[4933]: I1201 09:51:24.536140 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="35322d28-daef-4619-bd63-2baaa9ae46bd" containerName="glance-log" 
containerID="cri-o://f190ad7a0509497102623ab03b5b417ccd617a5d156ac1c4364a03c58e71f35b" gracePeriod=30 Dec 01 09:51:24 crc kubenswrapper[4933]: I1201 09:51:24.536254 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="35322d28-daef-4619-bd63-2baaa9ae46bd" containerName="glance-httpd" containerID="cri-o://94d2bc51c4734946e558f8ee2fba0364622c9c0d818fb15230a3163bcd8690da" gracePeriod=30 Dec 01 09:51:24 crc kubenswrapper[4933]: I1201 09:51:24.551943 4933 generic.go:334] "Generic (PLEG): container finished" podID="e925628e-0a4e-4893-b74d-7cd76160d44e" containerID="ec39fafe40cb1124fe562c39af12965099e3767a85c6ad02a63597c859ff1df6" exitCode=0 Dec 01 09:51:24 crc kubenswrapper[4933]: I1201 09:51:24.552183 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xv6k6" event={"ID":"e925628e-0a4e-4893-b74d-7cd76160d44e","Type":"ContainerDied","Data":"ec39fafe40cb1124fe562c39af12965099e3767a85c6ad02a63597c859ff1df6"} Dec 01 09:51:24 crc kubenswrapper[4933]: I1201 09:51:24.557536 4933 generic.go:334] "Generic (PLEG): container finished" podID="8004752c-1593-4d70-908a-aa63dd3a18b9" containerID="323b704850b279cbaa4f5903fc3c8edf9c05b37bb41d2ab6cb5877ace242d05a" exitCode=0 Dec 01 09:51:24 crc kubenswrapper[4933]: I1201 09:51:24.557568 4933 generic.go:334] "Generic (PLEG): container finished" podID="8004752c-1593-4d70-908a-aa63dd3a18b9" containerID="c1d6c29ce097233dfdc33730604818878fc30c1b3e473150d02441c5f193f110" exitCode=143 Dec 01 09:51:24 crc kubenswrapper[4933]: I1201 09:51:24.557987 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8004752c-1593-4d70-908a-aa63dd3a18b9","Type":"ContainerDied","Data":"323b704850b279cbaa4f5903fc3c8edf9c05b37bb41d2ab6cb5877ace242d05a"} Dec 01 09:51:24 crc kubenswrapper[4933]: I1201 09:51:24.558048 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8004752c-1593-4d70-908a-aa63dd3a18b9","Type":"ContainerDied","Data":"c1d6c29ce097233dfdc33730604818878fc30c1b3e473150d02441c5f193f110"} Dec 01 09:51:24 crc kubenswrapper[4933]: I1201 09:51:24.567655 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.56762926 podStartE2EDuration="7.56762926s" podCreationTimestamp="2025-12-01 09:51:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:51:24.56276903 +0000 UTC m=+1175.204492665" watchObservedRunningTime="2025-12-01 09:51:24.56762926 +0000 UTC m=+1175.209352865" Dec 01 09:51:25 crc kubenswrapper[4933]: I1201 09:51:25.580167 4933 generic.go:334] "Generic (PLEG): container finished" podID="35322d28-daef-4619-bd63-2baaa9ae46bd" containerID="94d2bc51c4734946e558f8ee2fba0364622c9c0d818fb15230a3163bcd8690da" exitCode=0 Dec 01 09:51:25 crc kubenswrapper[4933]: I1201 09:51:25.580900 4933 generic.go:334] "Generic (PLEG): container finished" podID="35322d28-daef-4619-bd63-2baaa9ae46bd" containerID="f190ad7a0509497102623ab03b5b417ccd617a5d156ac1c4364a03c58e71f35b" exitCode=143 Dec 01 09:51:25 crc kubenswrapper[4933]: I1201 09:51:25.582856 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"35322d28-daef-4619-bd63-2baaa9ae46bd","Type":"ContainerDied","Data":"94d2bc51c4734946e558f8ee2fba0364622c9c0d818fb15230a3163bcd8690da"} Dec 01 09:51:25 crc kubenswrapper[4933]: I1201 09:51:25.582988 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35322d28-daef-4619-bd63-2baaa9ae46bd","Type":"ContainerDied","Data":"f190ad7a0509497102623ab03b5b417ccd617a5d156ac1c4364a03c58e71f35b"} Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.368873 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.457329 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-74ddcb8d87-g7ljk"] Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.627690 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7blxn"] Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.628351 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-7blxn" podUID="4696ea4a-1bc2-4e1b-9209-67beddf255e8" containerName="dnsmasq-dns" containerID="cri-o://381a2f73e98940958a70a494e091cbc33ed05279cb6bb15e5418fe0fc4ffcd89" gracePeriod=10 Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.680358 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6775f97bdb-vs7m8"] Dec 01 09:51:28 crc kubenswrapper[4933]: E1201 09:51:28.680994 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7cbead-369f-4781-979b-1751fea8561d" containerName="init" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.681021 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7cbead-369f-4781-979b-1751fea8561d" containerName="init" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.681272 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e7cbead-369f-4781-979b-1751fea8561d" containerName="init" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.695069 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.706320 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.709750 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6775f97bdb-vs7m8"] Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.764784 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-768dbf8f5c-4lmql"] Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.786727 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-75479c6864-2fvz5"] Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.788813 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.801414 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75479c6864-2fvz5"] Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.815030 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ffcfb41-8086-4e28-b88a-da47dd38a844-scripts\") pod \"horizon-6775f97bdb-vs7m8\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.815152 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ffcfb41-8086-4e28-b88a-da47dd38a844-horizon-secret-key\") pod \"horizon-6775f97bdb-vs7m8\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.815200 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ffcfb41-8086-4e28-b88a-da47dd38a844-logs\") pod \"horizon-6775f97bdb-vs7m8\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.815232 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ffcfb41-8086-4e28-b88a-da47dd38a844-combined-ca-bundle\") pod \"horizon-6775f97bdb-vs7m8\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.815256 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ffcfb41-8086-4e28-b88a-da47dd38a844-config-data\") pod \"horizon-6775f97bdb-vs7m8\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.815276 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkfjf\" (UniqueName: \"kubernetes.io/projected/6ffcfb41-8086-4e28-b88a-da47dd38a844-kube-api-access-hkfjf\") pod \"horizon-6775f97bdb-vs7m8\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.815389 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ffcfb41-8086-4e28-b88a-da47dd38a844-horizon-tls-certs\") pod \"horizon-6775f97bdb-vs7m8\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.916976 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ffcfb41-8086-4e28-b88a-da47dd38a844-horizon-tls-certs\") pod \"horizon-6775f97bdb-vs7m8\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.917443 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/000656f6-99fd-43a3-8ade-31b200d0c18a-horizon-tls-certs\") pod \"horizon-75479c6864-2fvz5\" (UID: \"000656f6-99fd-43a3-8ade-31b200d0c18a\") " pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.917484 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/000656f6-99fd-43a3-8ade-31b200d0c18a-combined-ca-bundle\") pod \"horizon-75479c6864-2fvz5\" (UID: \"000656f6-99fd-43a3-8ade-31b200d0c18a\") " pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.917532 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ffcfb41-8086-4e28-b88a-da47dd38a844-scripts\") pod \"horizon-6775f97bdb-vs7m8\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.917564 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9zpr\" (UniqueName: \"kubernetes.io/projected/000656f6-99fd-43a3-8ade-31b200d0c18a-kube-api-access-v9zpr\") pod \"horizon-75479c6864-2fvz5\" (UID: \"000656f6-99fd-43a3-8ade-31b200d0c18a\") " pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.917603 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/000656f6-99fd-43a3-8ade-31b200d0c18a-config-data\") pod \"horizon-75479c6864-2fvz5\" (UID: \"000656f6-99fd-43a3-8ade-31b200d0c18a\") " pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.917632 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ffcfb41-8086-4e28-b88a-da47dd38a844-horizon-secret-key\") pod \"horizon-6775f97bdb-vs7m8\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.917649 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/000656f6-99fd-43a3-8ade-31b200d0c18a-scripts\") pod \"horizon-75479c6864-2fvz5\" (UID: \"000656f6-99fd-43a3-8ade-31b200d0c18a\") " pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.917686 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ffcfb41-8086-4e28-b88a-da47dd38a844-logs\") pod \"horizon-6775f97bdb-vs7m8\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.917709 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/000656f6-99fd-43a3-8ade-31b200d0c18a-horizon-secret-key\") pod \"horizon-75479c6864-2fvz5\" (UID: \"000656f6-99fd-43a3-8ade-31b200d0c18a\") " pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.917776 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6ffcfb41-8086-4e28-b88a-da47dd38a844-combined-ca-bundle\") pod \"horizon-6775f97bdb-vs7m8\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.917794 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/000656f6-99fd-43a3-8ade-31b200d0c18a-logs\") pod \"horizon-75479c6864-2fvz5\" (UID: \"000656f6-99fd-43a3-8ade-31b200d0c18a\") " pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.917814 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ffcfb41-8086-4e28-b88a-da47dd38a844-config-data\") pod \"horizon-6775f97bdb-vs7m8\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.917833 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkfjf\" (UniqueName: \"kubernetes.io/projected/6ffcfb41-8086-4e28-b88a-da47dd38a844-kube-api-access-hkfjf\") pod \"horizon-6775f97bdb-vs7m8\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.919954 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ffcfb41-8086-4e28-b88a-da47dd38a844-logs\") pod \"horizon-6775f97bdb-vs7m8\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.921838 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ffcfb41-8086-4e28-b88a-da47dd38a844-config-data\") pod \"horizon-6775f97bdb-vs7m8\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.922698 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ffcfb41-8086-4e28-b88a-da47dd38a844-scripts\") pod \"horizon-6775f97bdb-vs7m8\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.936333 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ffcfb41-8086-4e28-b88a-da47dd38a844-horizon-tls-certs\") pod \"horizon-6775f97bdb-vs7m8\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.936663 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ffcfb41-8086-4e28-b88a-da47dd38a844-combined-ca-bundle\") pod \"horizon-6775f97bdb-vs7m8\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.943859 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkfjf\" (UniqueName: \"kubernetes.io/projected/6ffcfb41-8086-4e28-b88a-da47dd38a844-kube-api-access-hkfjf\") pod \"horizon-6775f97bdb-vs7m8\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " 
pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:51:28 crc kubenswrapper[4933]: I1201 09:51:28.945188 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ffcfb41-8086-4e28-b88a-da47dd38a844-horizon-secret-key\") pod \"horizon-6775f97bdb-vs7m8\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:51:29 crc kubenswrapper[4933]: I1201 09:51:29.020041 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/000656f6-99fd-43a3-8ade-31b200d0c18a-logs\") pod \"horizon-75479c6864-2fvz5\" (UID: \"000656f6-99fd-43a3-8ade-31b200d0c18a\") " pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:51:29 crc kubenswrapper[4933]: I1201 09:51:29.020175 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/000656f6-99fd-43a3-8ade-31b200d0c18a-horizon-tls-certs\") pod \"horizon-75479c6864-2fvz5\" (UID: \"000656f6-99fd-43a3-8ade-31b200d0c18a\") " pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:51:29 crc kubenswrapper[4933]: I1201 09:51:29.020233 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/000656f6-99fd-43a3-8ade-31b200d0c18a-combined-ca-bundle\") pod \"horizon-75479c6864-2fvz5\" (UID: \"000656f6-99fd-43a3-8ade-31b200d0c18a\") " pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:51:29 crc kubenswrapper[4933]: I1201 09:51:29.020341 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9zpr\" (UniqueName: \"kubernetes.io/projected/000656f6-99fd-43a3-8ade-31b200d0c18a-kube-api-access-v9zpr\") pod \"horizon-75479c6864-2fvz5\" (UID: \"000656f6-99fd-43a3-8ade-31b200d0c18a\") " pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:51:29 crc kubenswrapper[4933]: I1201 09:51:29.020406 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/000656f6-99fd-43a3-8ade-31b200d0c18a-config-data\") pod \"horizon-75479c6864-2fvz5\" (UID: \"000656f6-99fd-43a3-8ade-31b200d0c18a\") " pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:51:29 crc kubenswrapper[4933]: I1201 09:51:29.020436 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/000656f6-99fd-43a3-8ade-31b200d0c18a-scripts\") pod \"horizon-75479c6864-2fvz5\" (UID: \"000656f6-99fd-43a3-8ade-31b200d0c18a\") " pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:51:29 crc kubenswrapper[4933]: I1201 09:51:29.020489 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/000656f6-99fd-43a3-8ade-31b200d0c18a-horizon-secret-key\") pod \"horizon-75479c6864-2fvz5\" (UID: \"000656f6-99fd-43a3-8ade-31b200d0c18a\") " pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:51:29 crc kubenswrapper[4933]: I1201 09:51:29.020617 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/000656f6-99fd-43a3-8ade-31b200d0c18a-logs\") pod \"horizon-75479c6864-2fvz5\" (UID: \"000656f6-99fd-43a3-8ade-31b200d0c18a\") " pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:51:29 crc kubenswrapper[4933]: I1201 09:51:29.025205 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/000656f6-99fd-43a3-8ade-31b200d0c18a-config-data\") pod \"horizon-75479c6864-2fvz5\" (UID: \"000656f6-99fd-43a3-8ade-31b200d0c18a\") " pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:51:29 crc kubenswrapper[4933]: I1201 09:51:29.029298 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/000656f6-99fd-43a3-8ade-31b200d0c18a-scripts\") pod \"horizon-75479c6864-2fvz5\" (UID: \"000656f6-99fd-43a3-8ade-31b200d0c18a\") " pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:51:29 crc kubenswrapper[4933]: I1201 09:51:29.030691 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/000656f6-99fd-43a3-8ade-31b200d0c18a-horizon-secret-key\") pod \"horizon-75479c6864-2fvz5\" (UID: \"000656f6-99fd-43a3-8ade-31b200d0c18a\") " pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:51:29 crc kubenswrapper[4933]: I1201 09:51:29.031189 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/000656f6-99fd-43a3-8ade-31b200d0c18a-horizon-tls-certs\") pod \"horizon-75479c6864-2fvz5\" (UID: \"000656f6-99fd-43a3-8ade-31b200d0c18a\") " pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:51:29 crc kubenswrapper[4933]: I1201 09:51:29.032352 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/000656f6-99fd-43a3-8ade-31b200d0c18a-combined-ca-bundle\") pod \"horizon-75479c6864-2fvz5\" (UID: \"000656f6-99fd-43a3-8ade-31b200d0c18a\") " pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:51:29 crc kubenswrapper[4933]: I1201 09:51:29.054875 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9zpr\" (UniqueName: \"kubernetes.io/projected/000656f6-99fd-43a3-8ade-31b200d0c18a-kube-api-access-v9zpr\") pod \"horizon-75479c6864-2fvz5\" (UID: \"000656f6-99fd-43a3-8ade-31b200d0c18a\") " pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:51:29 crc kubenswrapper[4933]: I1201 09:51:29.058543 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:51:29 crc kubenswrapper[4933]: I1201 09:51:29.135418 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:51:29 crc kubenswrapper[4933]: I1201 09:51:29.681438 4933 generic.go:334] "Generic (PLEG): container finished" podID="4696ea4a-1bc2-4e1b-9209-67beddf255e8" containerID="381a2f73e98940958a70a494e091cbc33ed05279cb6bb15e5418fe0fc4ffcd89" exitCode=0 Dec 01 09:51:29 crc kubenswrapper[4933]: I1201 09:51:29.688886 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7blxn" event={"ID":"4696ea4a-1bc2-4e1b-9209-67beddf255e8","Type":"ContainerDied","Data":"381a2f73e98940958a70a494e091cbc33ed05279cb6bb15e5418fe0fc4ffcd89"} Dec 01 09:51:32 crc kubenswrapper[4933]: I1201 09:51:32.209697 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-7blxn" podUID="4696ea4a-1bc2-4e1b-9209-67beddf255e8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.118:5353: connect: connection refused" Dec 01 09:51:36 crc kubenswrapper[4933]: E1201 09:51:36.299643 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 01 09:51:36 crc kubenswrapper[4933]: E1201 09:51:36.300451 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59fh6fhd8h668h98h566h7h67fh56ch5fh6ch558h66ch5cdh64h5bdhf6h77h679h9bh56dh57dh67h679h5f6h9h568hc6h578h8h66dh58fq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f4c47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6fc68bdc87-867bw_openstack(c09596af-78c2-4f79-8f0b-121ef7f9ef9a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:51:36 crc kubenswrapper[4933]: E1201 09:51:36.303778 4933 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6fc68bdc87-867bw" podUID="c09596af-78c2-4f79-8f0b-121ef7f9ef9a" Dec 01 09:51:36 crc kubenswrapper[4933]: E1201 09:51:36.605662 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Dec 01 09:51:36 crc kubenswrapper[4933]: E1201 09:51:36.606160 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5fh5f5h579h99h95h5dfh697h5b6hffh55dhch64dh566h99h89h649hf5h664hfch77h55bh54ch66bh567h5h8bh5f9hc5h55bh589h644h66q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7s6wm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:51:36 crc kubenswrapper[4933]: E1201 09:51:36.674595 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 01 09:51:36 crc kubenswrapper[4933]: E1201 09:51:36.674973 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c5h74h65bh5fdh649h674h5dfh547h567h6dh5d9h59bhc4hbfh5bdhd5h9h58h8bh58fh5bhc6h554h95h676h56dhd5h59hc5h577h545h5ffq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9f4cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-74ddcb8d87-g7ljk_openstack(1eb99423-f394-4f95-9279-13f68e394e4f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:51:36 crc kubenswrapper[4933]: E1201 09:51:36.678516 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-74ddcb8d87-g7ljk" podUID="1eb99423-f394-4f95-9279-13f68e394e4f" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.717813 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xv6k6" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.727390 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.790829 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xv6k6" event={"ID":"e925628e-0a4e-4893-b74d-7cd76160d44e","Type":"ContainerDied","Data":"c8c24d122becaec36c26ba51b95805c4e9b71e56c64081cb8a92f853ebe53a2d"} Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.790865 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xv6k6" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.790886 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8c24d122becaec36c26ba51b95805c4e9b71e56c64081cb8a92f853ebe53a2d" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.794074 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8004752c-1593-4d70-908a-aa63dd3a18b9","Type":"ContainerDied","Data":"8c503155a4349f3748b739242efd59f4e5d7ba39c4c9284f0ce852c39fd3e1b9"} Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.794165 4933 scope.go:117] "RemoveContainer" containerID="323b704850b279cbaa4f5903fc3c8edf9c05b37bb41d2ab6cb5877ace242d05a" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.794219 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.812977 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"8004752c-1593-4d70-908a-aa63dd3a18b9\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.813054 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-fernet-keys\") pod \"e925628e-0a4e-4893-b74d-7cd76160d44e\" (UID: \"e925628e-0a4e-4893-b74d-7cd76160d44e\") " Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.813083 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-config-data\") pod \"e925628e-0a4e-4893-b74d-7cd76160d44e\" (UID: \"e925628e-0a4e-4893-b74d-7cd76160d44e\") " Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.813109 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8004752c-1593-4d70-908a-aa63dd3a18b9-config-data\") pod \"8004752c-1593-4d70-908a-aa63dd3a18b9\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.813298 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-scripts\") pod \"e925628e-0a4e-4893-b74d-7cd76160d44e\" (UID: \"e925628e-0a4e-4893-b74d-7cd76160d44e\") " Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.813383 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-combined-ca-bundle\") pod \"e925628e-0a4e-4893-b74d-7cd76160d44e\" (UID: \"e925628e-0a4e-4893-b74d-7cd76160d44e\") " Dec 01 09:51:36 crc kubenswrapper[4933]: 
I1201 09:51:36.813406 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8004752c-1593-4d70-908a-aa63dd3a18b9-httpd-run\") pod \"8004752c-1593-4d70-908a-aa63dd3a18b9\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.813462 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l24r\" (UniqueName: \"kubernetes.io/projected/e925628e-0a4e-4893-b74d-7cd76160d44e-kube-api-access-4l24r\") pod \"e925628e-0a4e-4893-b74d-7cd76160d44e\" (UID: \"e925628e-0a4e-4893-b74d-7cd76160d44e\") " Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.813491 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8004752c-1593-4d70-908a-aa63dd3a18b9-logs\") pod \"8004752c-1593-4d70-908a-aa63dd3a18b9\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.813533 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-credential-keys\") pod \"e925628e-0a4e-4893-b74d-7cd76160d44e\" (UID: \"e925628e-0a4e-4893-b74d-7cd76160d44e\") " Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.813596 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg487\" (UniqueName: \"kubernetes.io/projected/8004752c-1593-4d70-908a-aa63dd3a18b9-kube-api-access-tg487\") pod \"8004752c-1593-4d70-908a-aa63dd3a18b9\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.813677 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8004752c-1593-4d70-908a-aa63dd3a18b9-scripts\") pod \"8004752c-1593-4d70-908a-aa63dd3a18b9\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.813716 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8004752c-1593-4d70-908a-aa63dd3a18b9-combined-ca-bundle\") pod \"8004752c-1593-4d70-908a-aa63dd3a18b9\" (UID: \"8004752c-1593-4d70-908a-aa63dd3a18b9\") " Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.815017 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8004752c-1593-4d70-908a-aa63dd3a18b9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8004752c-1593-4d70-908a-aa63dd3a18b9" (UID: "8004752c-1593-4d70-908a-aa63dd3a18b9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.815338 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8004752c-1593-4d70-908a-aa63dd3a18b9-logs" (OuterVolumeSpecName: "logs") pod "8004752c-1593-4d70-908a-aa63dd3a18b9" (UID: "8004752c-1593-4d70-908a-aa63dd3a18b9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.825999 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e925628e-0a4e-4893-b74d-7cd76160d44e" (UID: "e925628e-0a4e-4893-b74d-7cd76160d44e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.826044 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-scripts" (OuterVolumeSpecName: "scripts") pod "e925628e-0a4e-4893-b74d-7cd76160d44e" (UID: "e925628e-0a4e-4893-b74d-7cd76160d44e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.826063 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8004752c-1593-4d70-908a-aa63dd3a18b9-kube-api-access-tg487" (OuterVolumeSpecName: "kube-api-access-tg487") pod "8004752c-1593-4d70-908a-aa63dd3a18b9" (UID: "8004752c-1593-4d70-908a-aa63dd3a18b9"). InnerVolumeSpecName "kube-api-access-tg487". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.832005 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8004752c-1593-4d70-908a-aa63dd3a18b9-scripts" (OuterVolumeSpecName: "scripts") pod "8004752c-1593-4d70-908a-aa63dd3a18b9" (UID: "8004752c-1593-4d70-908a-aa63dd3a18b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.845651 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e925628e-0a4e-4893-b74d-7cd76160d44e" (UID: "e925628e-0a4e-4893-b74d-7cd76160d44e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.847194 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "8004752c-1593-4d70-908a-aa63dd3a18b9" (UID: "8004752c-1593-4d70-908a-aa63dd3a18b9"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.857740 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-config-data" (OuterVolumeSpecName: "config-data") pod "e925628e-0a4e-4893-b74d-7cd76160d44e" (UID: "e925628e-0a4e-4893-b74d-7cd76160d44e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.861194 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8004752c-1593-4d70-908a-aa63dd3a18b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8004752c-1593-4d70-908a-aa63dd3a18b9" (UID: "8004752c-1593-4d70-908a-aa63dd3a18b9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.864034 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e925628e-0a4e-4893-b74d-7cd76160d44e-kube-api-access-4l24r" (OuterVolumeSpecName: "kube-api-access-4l24r") pod "e925628e-0a4e-4893-b74d-7cd76160d44e" (UID: "e925628e-0a4e-4893-b74d-7cd76160d44e"). InnerVolumeSpecName "kube-api-access-4l24r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.875515 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e925628e-0a4e-4893-b74d-7cd76160d44e" (UID: "e925628e-0a4e-4893-b74d-7cd76160d44e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.886712 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8004752c-1593-4d70-908a-aa63dd3a18b9-config-data" (OuterVolumeSpecName: "config-data") pod "8004752c-1593-4d70-908a-aa63dd3a18b9" (UID: "8004752c-1593-4d70-908a-aa63dd3a18b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.919990 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.920566 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.920594 4933 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8004752c-1593-4d70-908a-aa63dd3a18b9-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.920610 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l24r\" (UniqueName: \"kubernetes.io/projected/e925628e-0a4e-4893-b74d-7cd76160d44e-kube-api-access-4l24r\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.920629 4933 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8004752c-1593-4d70-908a-aa63dd3a18b9-logs\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.920641 4933 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.920656 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg487\" (UniqueName: \"kubernetes.io/projected/8004752c-1593-4d70-908a-aa63dd3a18b9-kube-api-access-tg487\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.920668 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8004752c-1593-4d70-908a-aa63dd3a18b9-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 
09:51:36.920682 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8004752c-1593-4d70-908a-aa63dd3a18b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.920730 4933 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.920759 4933 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.920774 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e925628e-0a4e-4893-b74d-7cd76160d44e-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.920791 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8004752c-1593-4d70-908a-aa63dd3a18b9-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:36 crc kubenswrapper[4933]: I1201 09:51:36.946249 4933 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.023507 4933 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.151260 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.166451 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.180975 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 09:51:37 crc kubenswrapper[4933]: E1201 09:51:37.185422 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e925628e-0a4e-4893-b74d-7cd76160d44e" containerName="keystone-bootstrap" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.185469 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="e925628e-0a4e-4893-b74d-7cd76160d44e" containerName="keystone-bootstrap" Dec 01 09:51:37 crc kubenswrapper[4933]: E1201 09:51:37.185508 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8004752c-1593-4d70-908a-aa63dd3a18b9" containerName="glance-log" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.185519 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="8004752c-1593-4d70-908a-aa63dd3a18b9" containerName="glance-log" Dec 01 09:51:37 crc kubenswrapper[4933]: E1201 09:51:37.185537 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8004752c-1593-4d70-908a-aa63dd3a18b9" containerName="glance-httpd" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.185545 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="8004752c-1593-4d70-908a-aa63dd3a18b9" containerName="glance-httpd" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.185824 4933 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e925628e-0a4e-4893-b74d-7cd76160d44e" containerName="keystone-bootstrap" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.185842 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="8004752c-1593-4d70-908a-aa63dd3a18b9" containerName="glance-httpd" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.185862 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="8004752c-1593-4d70-908a-aa63dd3a18b9" containerName="glance-log" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.187235 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.194082 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.215718 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.218048 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.221998 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-7blxn" podUID="4696ea4a-1bc2-4e1b-9209-67beddf255e8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.118:5353: connect: connection refused" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.339657 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6htr\" (UniqueName: \"kubernetes.io/projected/02e5b782-8d28-4206-aeb2-a9f1976abc8f-kube-api-access-t6htr\") pod \"glance-default-external-api-0\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") " pod="openstack/glance-default-external-api-0" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.339799 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02e5b782-8d28-4206-aeb2-a9f1976abc8f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") " pod="openstack/glance-default-external-api-0" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.339894 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e5b782-8d28-4206-aeb2-a9f1976abc8f-config-data\") pod \"glance-default-external-api-0\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") " pod="openstack/glance-default-external-api-0" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.339945 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02e5b782-8d28-4206-aeb2-a9f1976abc8f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") " pod="openstack/glance-default-external-api-0" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.339983 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02e5b782-8d28-4206-aeb2-a9f1976abc8f-logs\") pod \"glance-default-external-api-0\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") " pod="openstack/glance-default-external-api-0" Dec 
01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.340108 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02e5b782-8d28-4206-aeb2-a9f1976abc8f-scripts\") pod \"glance-default-external-api-0\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") " pod="openstack/glance-default-external-api-0" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.340285 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") " pod="openstack/glance-default-external-api-0" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.340619 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/02e5b782-8d28-4206-aeb2-a9f1976abc8f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") " pod="openstack/glance-default-external-api-0" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.442125 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") " pod="openstack/glance-default-external-api-0" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.442580 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/02e5b782-8d28-4206-aeb2-a9f1976abc8f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") " pod="openstack/glance-default-external-api-0" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.442646 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6htr\" (UniqueName: \"kubernetes.io/projected/02e5b782-8d28-4206-aeb2-a9f1976abc8f-kube-api-access-t6htr\") pod \"glance-default-external-api-0\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") " pod="openstack/glance-default-external-api-0" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.442676 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02e5b782-8d28-4206-aeb2-a9f1976abc8f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") " pod="openstack/glance-default-external-api-0" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.442707 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e5b782-8d28-4206-aeb2-a9f1976abc8f-config-data\") pod \"glance-default-external-api-0\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") " pod="openstack/glance-default-external-api-0" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.442729 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02e5b782-8d28-4206-aeb2-a9f1976abc8f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") " pod="openstack/glance-default-external-api-0" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 
09:51:37.442756 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02e5b782-8d28-4206-aeb2-a9f1976abc8f-logs\") pod \"glance-default-external-api-0\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") " pod="openstack/glance-default-external-api-0" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.442803 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02e5b782-8d28-4206-aeb2-a9f1976abc8f-scripts\") pod \"glance-default-external-api-0\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") " pod="openstack/glance-default-external-api-0" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.444945 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02e5b782-8d28-4206-aeb2-a9f1976abc8f-logs\") pod \"glance-default-external-api-0\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") " pod="openstack/glance-default-external-api-0" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.445189 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/02e5b782-8d28-4206-aeb2-a9f1976abc8f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") " pod="openstack/glance-default-external-api-0" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.442320 4933 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.447482 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02e5b782-8d28-4206-aeb2-a9f1976abc8f-scripts\") pod \"glance-default-external-api-0\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") " pod="openstack/glance-default-external-api-0" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.450063 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02e5b782-8d28-4206-aeb2-a9f1976abc8f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") " pod="openstack/glance-default-external-api-0" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.450272 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02e5b782-8d28-4206-aeb2-a9f1976abc8f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") " pod="openstack/glance-default-external-api-0" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.451053 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e5b782-8d28-4206-aeb2-a9f1976abc8f-config-data\") pod \"glance-default-external-api-0\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") " pod="openstack/glance-default-external-api-0" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.467017 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6htr\" (UniqueName: 
\"kubernetes.io/projected/02e5b782-8d28-4206-aeb2-a9f1976abc8f-kube-api-access-t6htr\") pod \"glance-default-external-api-0\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") " pod="openstack/glance-default-external-api-0" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.486015 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") " pod="openstack/glance-default-external-api-0" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.543289 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.705004 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8004752c-1593-4d70-908a-aa63dd3a18b9" path="/var/lib/kubelet/pods/8004752c-1593-4d70-908a-aa63dd3a18b9/volumes" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.908354 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xv6k6"] Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.927568 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xv6k6"] Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.947141 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-cstm4"] Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.950165 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cstm4" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.953674 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.954201 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mbpv6" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.954614 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.954838 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.963548 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 01 09:51:37 crc kubenswrapper[4933]: I1201 09:51:37.989145 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cstm4"] Dec 01 09:51:38 crc kubenswrapper[4933]: I1201 09:51:38.079915 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-combined-ca-bundle\") pod \"keystone-bootstrap-cstm4\" (UID: \"321a4d39-7ce4-4385-a6f5-5204da92683b\") " pod="openstack/keystone-bootstrap-cstm4" Dec 01 09:51:38 crc kubenswrapper[4933]: I1201 09:51:38.080071 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-scripts\") pod \"keystone-bootstrap-cstm4\" (UID: \"321a4d39-7ce4-4385-a6f5-5204da92683b\") " pod="openstack/keystone-bootstrap-cstm4" Dec 01 09:51:38 crc kubenswrapper[4933]: I1201 09:51:38.080116 4933 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-credential-keys\") pod \"keystone-bootstrap-cstm4\" (UID: \"321a4d39-7ce4-4385-a6f5-5204da92683b\") " pod="openstack/keystone-bootstrap-cstm4" Dec 01 09:51:38 crc kubenswrapper[4933]: I1201 09:51:38.080319 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnlwq\" (UniqueName: \"kubernetes.io/projected/321a4d39-7ce4-4385-a6f5-5204da92683b-kube-api-access-nnlwq\") pod \"keystone-bootstrap-cstm4\" (UID: \"321a4d39-7ce4-4385-a6f5-5204da92683b\") " pod="openstack/keystone-bootstrap-cstm4" Dec 01 09:51:38 crc kubenswrapper[4933]: I1201 09:51:38.080352 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-fernet-keys\") pod \"keystone-bootstrap-cstm4\" (UID: \"321a4d39-7ce4-4385-a6f5-5204da92683b\") " pod="openstack/keystone-bootstrap-cstm4" Dec 01 09:51:38 crc kubenswrapper[4933]: I1201 09:51:38.080413 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-config-data\") pod \"keystone-bootstrap-cstm4\" (UID: \"321a4d39-7ce4-4385-a6f5-5204da92683b\") " pod="openstack/keystone-bootstrap-cstm4" Dec 01 09:51:38 crc kubenswrapper[4933]: I1201 09:51:38.182565 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnlwq\" (UniqueName: \"kubernetes.io/projected/321a4d39-7ce4-4385-a6f5-5204da92683b-kube-api-access-nnlwq\") pod \"keystone-bootstrap-cstm4\" (UID: \"321a4d39-7ce4-4385-a6f5-5204da92683b\") " pod="openstack/keystone-bootstrap-cstm4" Dec 01 09:51:38 crc kubenswrapper[4933]: I1201 09:51:38.182930 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-fernet-keys\") pod \"keystone-bootstrap-cstm4\" (UID: \"321a4d39-7ce4-4385-a6f5-5204da92683b\") " pod="openstack/keystone-bootstrap-cstm4" Dec 01 09:51:38 crc kubenswrapper[4933]: I1201 09:51:38.182962 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-config-data\") pod \"keystone-bootstrap-cstm4\" (UID: \"321a4d39-7ce4-4385-a6f5-5204da92683b\") " pod="openstack/keystone-bootstrap-cstm4" Dec 01 09:51:38 crc kubenswrapper[4933]: I1201 09:51:38.183000 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-combined-ca-bundle\") pod \"keystone-bootstrap-cstm4\" (UID: \"321a4d39-7ce4-4385-a6f5-5204da92683b\") " pod="openstack/keystone-bootstrap-cstm4" Dec 01 09:51:38 crc kubenswrapper[4933]: I1201 09:51:38.183802 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-scripts\") pod \"keystone-bootstrap-cstm4\" (UID: \"321a4d39-7ce4-4385-a6f5-5204da92683b\") " pod="openstack/keystone-bootstrap-cstm4" Dec 01 09:51:38 crc kubenswrapper[4933]: I1201 09:51:38.183834 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-credential-keys\") pod \"keystone-bootstrap-cstm4\" (UID: \"321a4d39-7ce4-4385-a6f5-5204da92683b\") " pod="openstack/keystone-bootstrap-cstm4" Dec 01 09:51:38 crc kubenswrapper[4933]: I1201 09:51:38.190584 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-credential-keys\") pod \"keystone-bootstrap-cstm4\" (UID: \"321a4d39-7ce4-4385-a6f5-5204da92683b\") " pod="openstack/keystone-bootstrap-cstm4" Dec 01 09:51:38 crc kubenswrapper[4933]: I1201 09:51:38.190669 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-scripts\") pod \"keystone-bootstrap-cstm4\" (UID: \"321a4d39-7ce4-4385-a6f5-5204da92683b\") " pod="openstack/keystone-bootstrap-cstm4" Dec 01 09:51:38 crc kubenswrapper[4933]: I1201 09:51:38.191164 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-config-data\") pod \"keystone-bootstrap-cstm4\" (UID: \"321a4d39-7ce4-4385-a6f5-5204da92683b\") " pod="openstack/keystone-bootstrap-cstm4" Dec 01 09:51:38 crc kubenswrapper[4933]: I1201 09:51:38.191171 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-combined-ca-bundle\") pod \"keystone-bootstrap-cstm4\" (UID: \"321a4d39-7ce4-4385-a6f5-5204da92683b\") " pod="openstack/keystone-bootstrap-cstm4" Dec 01 09:51:38 crc kubenswrapper[4933]: I1201 09:51:38.199524 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-fernet-keys\") pod \"keystone-bootstrap-cstm4\" (UID: \"321a4d39-7ce4-4385-a6f5-5204da92683b\") " pod="openstack/keystone-bootstrap-cstm4" Dec 01 09:51:38 crc kubenswrapper[4933]: I1201 09:51:38.206010 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnlwq\" (UniqueName: \"kubernetes.io/projected/321a4d39-7ce4-4385-a6f5-5204da92683b-kube-api-access-nnlwq\") pod \"keystone-bootstrap-cstm4\" (UID: \"321a4d39-7ce4-4385-a6f5-5204da92683b\") " pod="openstack/keystone-bootstrap-cstm4" Dec 01 09:51:38 crc kubenswrapper[4933]: I1201 09:51:38.283695 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cstm4" Dec 01 09:51:39 crc kubenswrapper[4933]: I1201 09:51:39.682462 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e925628e-0a4e-4893-b74d-7cd76160d44e" path="/var/lib/kubelet/pods/e925628e-0a4e-4893-b74d-7cd76160d44e/volumes" Dec 01 09:51:41 crc kubenswrapper[4933]: I1201 09:51:41.748108 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:51:41 crc kubenswrapper[4933]: I1201 09:51:41.748224 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:51:45 crc kubenswrapper[4933]: I1201 09:51:45.922722 4933 generic.go:334] "Generic (PLEG): container finished" podID="c4531da4-441c-4003-9f20-719853edb0b4" containerID="0649f1c6942f179c383337224226162a54f82fdbc301ffc50fafa75a579626ee" exitCode=0 Dec 01 09:51:45 crc kubenswrapper[4933]: I1201 09:51:45.922816 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ltlmj" event={"ID":"c4531da4-441c-4003-9f20-719853edb0b4","Type":"ContainerDied","Data":"0649f1c6942f179c383337224226162a54f82fdbc301ffc50fafa75a579626ee"} Dec 01 09:51:47 crc kubenswrapper[4933]: I1201 09:51:47.210243 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-7blxn" podUID="4696ea4a-1bc2-4e1b-9209-67beddf255e8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.118:5353: i/o timeout" Dec 01 09:51:47 crc kubenswrapper[4933]: I1201 09:51:47.211009 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-7blxn" Dec 01 09:51:47 crc kubenswrapper[4933]: E1201 09:51:47.790925 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 01 09:51:47 crc kubenswrapper[4933]: E1201 09:51:47.791158 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n544h68dh54h687hd9h68ch5dh64ch5cbh57h586h58h5f7h8bhbbh64bh69h6fh5c8h68ch84h6ch5cfh57fh5f9h6bh574hf7h57bh5b7h68h8q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gl9gf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-768dbf8f5c-4lmql_openstack(6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:51:47 crc kubenswrapper[4933]: E1201 09:51:47.797944 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-768dbf8f5c-4lmql" podUID="6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5" Dec 01 09:51:47 crc kubenswrapper[4933]: E1201 09:51:47.816364 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 01 09:51:47 crc kubenswrapper[4933]: E1201 09:51:47.816648 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q29tl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-n76m8_openstack(4d3e5dc3-470a-4fa0-b17c-733457329c79): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:51:47 crc kubenswrapper[4933]: E1201 09:51:47.817894 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-n76m8" podUID="4d3e5dc3-470a-4fa0-b17c-733457329c79" Dec 01 09:51:47 crc kubenswrapper[4933]: I1201 09:51:47.901229 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6fc68bdc87-867bw" Dec 01 09:51:47 crc kubenswrapper[4933]: I1201 09:51:47.947779 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6fc68bdc87-867bw" event={"ID":"c09596af-78c2-4f79-8f0b-121ef7f9ef9a","Type":"ContainerDied","Data":"b71ba3e5d3edca4e9f822adb2f477cd21a48ac18e581e96211b5b01080450bbc"} Dec 01 09:51:47 crc kubenswrapper[4933]: I1201 09:51:47.947910 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6fc68bdc87-867bw" Dec 01 09:51:47 crc kubenswrapper[4933]: E1201 09:51:47.950397 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-n76m8" podUID="4d3e5dc3-470a-4fa0-b17c-733457329c79" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.020401 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4c47\" (UniqueName: \"kubernetes.io/projected/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-kube-api-access-f4c47\") pod \"c09596af-78c2-4f79-8f0b-121ef7f9ef9a\" (UID: \"c09596af-78c2-4f79-8f0b-121ef7f9ef9a\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.020495 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-horizon-secret-key\") pod \"c09596af-78c2-4f79-8f0b-121ef7f9ef9a\" (UID: \"c09596af-78c2-4f79-8f0b-121ef7f9ef9a\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.020530 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-scripts\") pod \"c09596af-78c2-4f79-8f0b-121ef7f9ef9a\" (UID: \"c09596af-78c2-4f79-8f0b-121ef7f9ef9a\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.020671 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-config-data\") pod \"c09596af-78c2-4f79-8f0b-121ef7f9ef9a\" (UID: \"c09596af-78c2-4f79-8f0b-121ef7f9ef9a\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.020758 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-logs\") pod \"c09596af-78c2-4f79-8f0b-121ef7f9ef9a\" (UID: \"c09596af-78c2-4f79-8f0b-121ef7f9ef9a\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.021359 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-scripts" (OuterVolumeSpecName: "scripts") pod "c09596af-78c2-4f79-8f0b-121ef7f9ef9a" (UID: "c09596af-78c2-4f79-8f0b-121ef7f9ef9a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.021774 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-config-data" (OuterVolumeSpecName: "config-data") pod "c09596af-78c2-4f79-8f0b-121ef7f9ef9a" (UID: "c09596af-78c2-4f79-8f0b-121ef7f9ef9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.022470 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-logs" (OuterVolumeSpecName: "logs") pod "c09596af-78c2-4f79-8f0b-121ef7f9ef9a" (UID: "c09596af-78c2-4f79-8f0b-121ef7f9ef9a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.027211 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c09596af-78c2-4f79-8f0b-121ef7f9ef9a" (UID: "c09596af-78c2-4f79-8f0b-121ef7f9ef9a"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.028035 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-kube-api-access-f4c47" (OuterVolumeSpecName: "kube-api-access-f4c47") pod "c09596af-78c2-4f79-8f0b-121ef7f9ef9a" (UID: "c09596af-78c2-4f79-8f0b-121ef7f9ef9a"). InnerVolumeSpecName "kube-api-access-f4c47". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.123340 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.123804 4933 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-logs\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.123844 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4c47\" (UniqueName: \"kubernetes.io/projected/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-kube-api-access-f4c47\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.123861 4933 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.123874 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c09596af-78c2-4f79-8f0b-121ef7f9ef9a-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.325622 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6fc68bdc87-867bw"] Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.334333 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6fc68bdc87-867bw"] Dec 01 09:51:48 crc kubenswrapper[4933]: E1201 09:51:48.474298 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Dec 01 09:51:48 crc kubenswrapper[4933]: E1201 09:51:48.474582 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ss9td,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-qw69v_openstack(82f89d96-ceb5-4012-9273-68d00cb0780b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 09:51:48 crc kubenswrapper[4933]: E1201 09:51:48.475741 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-qw69v" podUID="82f89d96-ceb5-4012-9273-68d00cb0780b" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.534741 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74ddcb8d87-g7ljk" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.546552 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.551036 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7blxn" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.567761 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ltlmj" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.574434 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-768dbf8f5c-4lmql" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.633921 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4696ea4a-1bc2-4e1b-9209-67beddf255e8-ovsdbserver-sb\") pod \"4696ea4a-1bc2-4e1b-9209-67beddf255e8\" (UID: \"4696ea4a-1bc2-4e1b-9209-67beddf255e8\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.633978 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35322d28-daef-4619-bd63-2baaa9ae46bd-combined-ca-bundle\") pod \"35322d28-daef-4619-bd63-2baaa9ae46bd\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.634044 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"35322d28-daef-4619-bd63-2baaa9ae46bd\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.634085 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9b9g\" (UniqueName: \"kubernetes.io/projected/4696ea4a-1bc2-4e1b-9209-67beddf255e8-kube-api-access-m9b9g\") pod \"4696ea4a-1bc2-4e1b-9209-67beddf255e8\" (UID: \"4696ea4a-1bc2-4e1b-9209-67beddf255e8\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.634122 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c4531da4-441c-4003-9f20-719853edb0b4-config\") pod \"c4531da4-441c-4003-9f20-719853edb0b4\" (UID: \"c4531da4-441c-4003-9f20-719853edb0b4\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.634145 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1eb99423-f394-4f95-9279-13f68e394e4f-config-data\") pod \"1eb99423-f394-4f95-9279-13f68e394e4f\" (UID: \"1eb99423-f394-4f95-9279-13f68e394e4f\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.634174 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4696ea4a-1bc2-4e1b-9209-67beddf255e8-ovsdbserver-nb\") pod \"4696ea4a-1bc2-4e1b-9209-67beddf255e8\" (UID: \"4696ea4a-1bc2-4e1b-9209-67beddf255e8\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.634202 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1eb99423-f394-4f95-9279-13f68e394e4f-logs\") pod \"1eb99423-f394-4f95-9279-13f68e394e4f\" (UID: \"1eb99423-f394-4f95-9279-13f68e394e4f\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.634224 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f4cd\" (UniqueName: \"kubernetes.io/projected/1eb99423-f394-4f95-9279-13f68e394e4f-kube-api-access-9f4cd\") pod \"1eb99423-f394-4f95-9279-13f68e394e4f\" (UID: \"1eb99423-f394-4f95-9279-13f68e394e4f\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.634251 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4696ea4a-1bc2-4e1b-9209-67beddf255e8-config\") pod \"4696ea4a-1bc2-4e1b-9209-67beddf255e8\" (UID: \"4696ea4a-1bc2-4e1b-9209-67beddf255e8\") " Dec 01 
09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.634276 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-logs\") pod \"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5\" (UID: \"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.634320 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-scripts\") pod \"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5\" (UID: \"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.634353 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4696ea4a-1bc2-4e1b-9209-67beddf255e8-dns-svc\") pod \"4696ea4a-1bc2-4e1b-9209-67beddf255e8\" (UID: \"4696ea4a-1bc2-4e1b-9209-67beddf255e8\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.634375 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1eb99423-f394-4f95-9279-13f68e394e4f-horizon-secret-key\") pod \"1eb99423-f394-4f95-9279-13f68e394e4f\" (UID: \"1eb99423-f394-4f95-9279-13f68e394e4f\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.634392 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35322d28-daef-4619-bd63-2baaa9ae46bd-logs\") pod \"35322d28-daef-4619-bd63-2baaa9ae46bd\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.634412 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbwh7\" (UniqueName: \"kubernetes.io/projected/35322d28-daef-4619-bd63-2baaa9ae46bd-kube-api-access-hbwh7\") pod \"35322d28-daef-4619-bd63-2baaa9ae46bd\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.634436 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1eb99423-f394-4f95-9279-13f68e394e4f-scripts\") pod \"1eb99423-f394-4f95-9279-13f68e394e4f\" (UID: \"1eb99423-f394-4f95-9279-13f68e394e4f\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.634464 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35322d28-daef-4619-bd63-2baaa9ae46bd-httpd-run\") pod \"35322d28-daef-4619-bd63-2baaa9ae46bd\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.634483 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35322d28-daef-4619-bd63-2baaa9ae46bd-config-data\") pod \"35322d28-daef-4619-bd63-2baaa9ae46bd\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.634522 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np5hh\" (UniqueName: \"kubernetes.io/projected/c4531da4-441c-4003-9f20-719853edb0b4-kube-api-access-np5hh\") pod \"c4531da4-441c-4003-9f20-719853edb0b4\" (UID: \"c4531da4-441c-4003-9f20-719853edb0b4\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.634563 4933 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-horizon-secret-key\") pod \"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5\" (UID: \"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.634592 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-config-data\") pod \"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5\" (UID: \"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.634610 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl9gf\" (UniqueName: \"kubernetes.io/projected/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-kube-api-access-gl9gf\") pod \"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5\" (UID: \"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.634630 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4531da4-441c-4003-9f20-719853edb0b4-combined-ca-bundle\") pod \"c4531da4-441c-4003-9f20-719853edb0b4\" (UID: \"c4531da4-441c-4003-9f20-719853edb0b4\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.634648 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35322d28-daef-4619-bd63-2baaa9ae46bd-scripts\") pod \"35322d28-daef-4619-bd63-2baaa9ae46bd\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.635713 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1eb99423-f394-4f95-9279-13f68e394e4f-logs" (OuterVolumeSpecName: "logs") pod "1eb99423-f394-4f95-9279-13f68e394e4f" (UID: "1eb99423-f394-4f95-9279-13f68e394e4f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.637189 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-logs" (OuterVolumeSpecName: "logs") pod "6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5" (UID: "6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.639601 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-scripts" (OuterVolumeSpecName: "scripts") pod "6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5" (UID: "6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.639791 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1eb99423-f394-4f95-9279-13f68e394e4f-config-data" (OuterVolumeSpecName: "config-data") pod "1eb99423-f394-4f95-9279-13f68e394e4f" (UID: "1eb99423-f394-4f95-9279-13f68e394e4f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.640299 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "35322d28-daef-4619-bd63-2baaa9ae46bd" (UID: "35322d28-daef-4619-bd63-2baaa9ae46bd"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.640539 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-config-data" (OuterVolumeSpecName: "config-data") pod "6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5" (UID: "6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.642781 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4696ea4a-1bc2-4e1b-9209-67beddf255e8-kube-api-access-m9b9g" (OuterVolumeSpecName: "kube-api-access-m9b9g") pod "4696ea4a-1bc2-4e1b-9209-67beddf255e8" (UID: "4696ea4a-1bc2-4e1b-9209-67beddf255e8"). InnerVolumeSpecName "kube-api-access-m9b9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.654033 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35322d28-daef-4619-bd63-2baaa9ae46bd-logs" (OuterVolumeSpecName: "logs") pod "35322d28-daef-4619-bd63-2baaa9ae46bd" (UID: "35322d28-daef-4619-bd63-2baaa9ae46bd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.654331 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35322d28-daef-4619-bd63-2baaa9ae46bd-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "35322d28-daef-4619-bd63-2baaa9ae46bd" (UID: "35322d28-daef-4619-bd63-2baaa9ae46bd"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.654510 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eb99423-f394-4f95-9279-13f68e394e4f-kube-api-access-9f4cd" (OuterVolumeSpecName: "kube-api-access-9f4cd") pod "1eb99423-f394-4f95-9279-13f68e394e4f" (UID: "1eb99423-f394-4f95-9279-13f68e394e4f"). InnerVolumeSpecName "kube-api-access-9f4cd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.662775 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1eb99423-f394-4f95-9279-13f68e394e4f-scripts" (OuterVolumeSpecName: "scripts") pod "1eb99423-f394-4f95-9279-13f68e394e4f" (UID: "1eb99423-f394-4f95-9279-13f68e394e4f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.665051 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4531da4-441c-4003-9f20-719853edb0b4-kube-api-access-np5hh" (OuterVolumeSpecName: "kube-api-access-np5hh") pod "c4531da4-441c-4003-9f20-719853edb0b4" (UID: "c4531da4-441c-4003-9f20-719853edb0b4"). InnerVolumeSpecName "kube-api-access-np5hh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.668729 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-kube-api-access-gl9gf" (OuterVolumeSpecName: "kube-api-access-gl9gf") pod "6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5" (UID: "6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5"). InnerVolumeSpecName "kube-api-access-gl9gf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.672249 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5" (UID: "6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.681356 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eb99423-f394-4f95-9279-13f68e394e4f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1eb99423-f394-4f95-9279-13f68e394e4f" (UID: "1eb99423-f394-4f95-9279-13f68e394e4f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.693978 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35322d28-daef-4619-bd63-2baaa9ae46bd-scripts" (OuterVolumeSpecName: "scripts") pod "35322d28-daef-4619-bd63-2baaa9ae46bd" (UID: "35322d28-daef-4619-bd63-2baaa9ae46bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.700129 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35322d28-daef-4619-bd63-2baaa9ae46bd-kube-api-access-hbwh7" (OuterVolumeSpecName: "kube-api-access-hbwh7") pod "35322d28-daef-4619-bd63-2baaa9ae46bd" (UID: "35322d28-daef-4619-bd63-2baaa9ae46bd"). InnerVolumeSpecName "kube-api-access-hbwh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.715419 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4531da4-441c-4003-9f20-719853edb0b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4531da4-441c-4003-9f20-719853edb0b4" (UID: "c4531da4-441c-4003-9f20-719853edb0b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.716737 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4531da4-441c-4003-9f20-719853edb0b4-config" (OuterVolumeSpecName: "config") pod "c4531da4-441c-4003-9f20-719853edb0b4" (UID: "c4531da4-441c-4003-9f20-719853edb0b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.737066 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35322d28-daef-4619-bd63-2baaa9ae46bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35322d28-daef-4619-bd63-2baaa9ae46bd" (UID: "35322d28-daef-4619-bd63-2baaa9ae46bd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.737792 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35322d28-daef-4619-bd63-2baaa9ae46bd-combined-ca-bundle\") pod \"35322d28-daef-4619-bd63-2baaa9ae46bd\" (UID: \"35322d28-daef-4619-bd63-2baaa9ae46bd\") " Dec 01 09:51:48 crc kubenswrapper[4933]: W1201 09:51:48.738433 4933 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/35322d28-daef-4619-bd63-2baaa9ae46bd/volumes/kubernetes.io~secret/combined-ca-bundle Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.738469 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35322d28-daef-4619-bd63-2baaa9ae46bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35322d28-daef-4619-bd63-2baaa9ae46bd" (UID: "35322d28-daef-4619-bd63-2baaa9ae46bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.739042 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1eb99423-f394-4f95-9279-13f68e394e4f-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.739072 4933 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35322d28-daef-4619-bd63-2baaa9ae46bd-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.739084 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np5hh\" (UniqueName: \"kubernetes.io/projected/c4531da4-441c-4003-9f20-719853edb0b4-kube-api-access-np5hh\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.739098 4933 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.739110 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.739096 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4696ea4a-1bc2-4e1b-9209-67beddf255e8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4696ea4a-1bc2-4e1b-9209-67beddf255e8" (UID: "4696ea4a-1bc2-4e1b-9209-67beddf255e8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.739122 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl9gf\" (UniqueName: \"kubernetes.io/projected/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-kube-api-access-gl9gf\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.739188 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4531da4-441c-4003-9f20-719853edb0b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.739202 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35322d28-daef-4619-bd63-2baaa9ae46bd-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.739214 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35322d28-daef-4619-bd63-2baaa9ae46bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.739242 4933 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.739262 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9b9g\" (UniqueName: \"kubernetes.io/projected/4696ea4a-1bc2-4e1b-9209-67beddf255e8-kube-api-access-m9b9g\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.739273 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c4531da4-441c-4003-9f20-719853edb0b4-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.739285 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1eb99423-f394-4f95-9279-13f68e394e4f-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.739295 4933 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1eb99423-f394-4f95-9279-13f68e394e4f-logs\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.739323 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f4cd\" (UniqueName: \"kubernetes.io/projected/1eb99423-f394-4f95-9279-13f68e394e4f-kube-api-access-9f4cd\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.739334 4933 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-logs\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.739345 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.739359 4933 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1eb99423-f394-4f95-9279-13f68e394e4f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.739371 4933 
reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35322d28-daef-4619-bd63-2baaa9ae46bd-logs\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.739383 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbwh7\" (UniqueName: \"kubernetes.io/projected/35322d28-daef-4619-bd63-2baaa9ae46bd-kube-api-access-hbwh7\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.740567 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4696ea4a-1bc2-4e1b-9209-67beddf255e8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4696ea4a-1bc2-4e1b-9209-67beddf255e8" (UID: "4696ea4a-1bc2-4e1b-9209-67beddf255e8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.744361 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35322d28-daef-4619-bd63-2baaa9ae46bd-config-data" (OuterVolumeSpecName: "config-data") pod "35322d28-daef-4619-bd63-2baaa9ae46bd" (UID: "35322d28-daef-4619-bd63-2baaa9ae46bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.753966 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4696ea4a-1bc2-4e1b-9209-67beddf255e8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4696ea4a-1bc2-4e1b-9209-67beddf255e8" (UID: "4696ea4a-1bc2-4e1b-9209-67beddf255e8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.759553 4933 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.778959 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4696ea4a-1bc2-4e1b-9209-67beddf255e8-config" (OuterVolumeSpecName: "config") pod "4696ea4a-1bc2-4e1b-9209-67beddf255e8" (UID: "4696ea4a-1bc2-4e1b-9209-67beddf255e8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.843675 4933 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.843747 4933 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4696ea4a-1bc2-4e1b-9209-67beddf255e8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.843766 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4696ea4a-1bc2-4e1b-9209-67beddf255e8-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.843781 4933 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4696ea4a-1bc2-4e1b-9209-67beddf255e8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.843796 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35322d28-daef-4619-bd63-2baaa9ae46bd-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.843818 4933 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4696ea4a-1bc2-4e1b-9209-67beddf255e8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.961596 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35322d28-daef-4619-bd63-2baaa9ae46bd","Type":"ContainerDied","Data":"b4a01308dab79f4e6f9186db2fa32f3b8e180372f1c8ea6dffdaa2b9782ee53c"} Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.961633 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.965831 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74ddcb8d87-g7ljk" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.965827 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74ddcb8d87-g7ljk" event={"ID":"1eb99423-f394-4f95-9279-13f68e394e4f","Type":"ContainerDied","Data":"102a6c5d762746d777d0b08b286dc6c161a409c936f6074918fd7266b0ea517d"} Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.968346 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-768dbf8f5c-4lmql" event={"ID":"6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5","Type":"ContainerDied","Data":"eb669341880b4b91feacea450e6e7493a5c1a8adc05bda1e32ad99bbff200dc0"} Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.968477 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-768dbf8f5c-4lmql" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.972431 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7blxn" event={"ID":"4696ea4a-1bc2-4e1b-9209-67beddf255e8","Type":"ContainerDied","Data":"bf051854a8ddcbca2e2e53ae3b67e54860e646109b47733fb2457e2b7e1a2faa"} Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.972500 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7blxn" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.979987 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ltlmj" Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.981608 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ltlmj" event={"ID":"c4531da4-441c-4003-9f20-719853edb0b4","Type":"ContainerDied","Data":"7e7454e32a4438a5d84b7fbc0fb04c9e1198ee597a8fe3e6d498425796a02f1a"} Dec 01 09:51:48 crc kubenswrapper[4933]: I1201 09:51:48.981682 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e7454e32a4438a5d84b7fbc0fb04c9e1198ee597a8fe3e6d498425796a02f1a" Dec 01 09:51:48 crc kubenswrapper[4933]: E1201 09:51:48.996553 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-qw69v" podUID="82f89d96-ceb5-4012-9273-68d00cb0780b" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:48.998226 4933 scope.go:117] "RemoveContainer" containerID="c1d6c29ce097233dfdc33730604818878fc30c1b3e473150d02441c5f193f110" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.310955 4933 scope.go:117] "RemoveContainer" containerID="94d2bc51c4734946e558f8ee2fba0364622c9c0d818fb15230a3163bcd8690da" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.347802 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-74ddcb8d87-g7ljk"] Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.367822 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-74ddcb8d87-g7ljk"] Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.377854 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.386271 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.393903 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7blxn"] Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.415649 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7blxn"] Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.439120 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 09:51:49 crc kubenswrapper[4933]: E1201 09:51:49.440074 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4531da4-441c-4003-9f20-719853edb0b4" containerName="neutron-db-sync" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.440115 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4531da4-441c-4003-9f20-719853edb0b4" containerName="neutron-db-sync" Dec 01 09:51:49 crc kubenswrapper[4933]: E1201 09:51:49.440146 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4696ea4a-1bc2-4e1b-9209-67beddf255e8" containerName="dnsmasq-dns" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.440154 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="4696ea4a-1bc2-4e1b-9209-67beddf255e8" containerName="dnsmasq-dns" Dec 01 09:51:49 crc kubenswrapper[4933]: E1201 09:51:49.440171 4933 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="35322d28-daef-4619-bd63-2baaa9ae46bd" containerName="glance-log" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.440179 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="35322d28-daef-4619-bd63-2baaa9ae46bd" containerName="glance-log" Dec 01 09:51:49 crc kubenswrapper[4933]: E1201 09:51:49.440202 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4696ea4a-1bc2-4e1b-9209-67beddf255e8" containerName="init" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.440208 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="4696ea4a-1bc2-4e1b-9209-67beddf255e8" containerName="init" Dec 01 09:51:49 crc kubenswrapper[4933]: E1201 09:51:49.440955 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35322d28-daef-4619-bd63-2baaa9ae46bd" containerName="glance-httpd" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.440966 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="35322d28-daef-4619-bd63-2baaa9ae46bd" containerName="glance-httpd" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.441193 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4531da4-441c-4003-9f20-719853edb0b4" containerName="neutron-db-sync" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.441210 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="4696ea4a-1bc2-4e1b-9209-67beddf255e8" containerName="dnsmasq-dns" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.441225 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="35322d28-daef-4619-bd63-2baaa9ae46bd" containerName="glance-log" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.441236 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="35322d28-daef-4619-bd63-2baaa9ae46bd" containerName="glance-httpd" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.443164 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.443943 4933 scope.go:117] "RemoveContainer" containerID="f190ad7a0509497102623ab03b5b417ccd617a5d156ac1c4364a03c58e71f35b" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.446902 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.447219 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.473551 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.498502 4933 scope.go:117] "RemoveContainer" containerID="381a2f73e98940958a70a494e091cbc33ed05279cb6bb15e5418fe0fc4ffcd89" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.511003 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-768dbf8f5c-4lmql"] Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.521047 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-768dbf8f5c-4lmql"] Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.552696 4933 scope.go:117] "RemoveContainer" containerID="3bd15b55c6aadbc003639be05b6b40ba850298791fb4361fc13e3e372b586ad9" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.565173 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e5be60e-fe89-43a0-b914-afbf646a6888-logs\") pod \"glance-default-internal-api-0\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.565229 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.565290 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e5be60e-fe89-43a0-b914-afbf646a6888-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.565378 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e5be60e-fe89-43a0-b914-afbf646a6888-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.565406 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e5be60e-fe89-43a0-b914-afbf646a6888-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.565456 4933 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e5be60e-fe89-43a0-b914-afbf646a6888-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.565512 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsql4\" (UniqueName: \"kubernetes.io/projected/2e5be60e-fe89-43a0-b914-afbf646a6888-kube-api-access-nsql4\") pod \"glance-default-internal-api-0\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.565536 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e5be60e-fe89-43a0-b914-afbf646a6888-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.712156 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e5be60e-fe89-43a0-b914-afbf646a6888-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.712227 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e5be60e-fe89-43a0-b914-afbf646a6888-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.712280 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e5be60e-fe89-43a0-b914-afbf646a6888-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.712481 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e5be60e-fe89-43a0-b914-afbf646a6888-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.712524 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsql4\" (UniqueName: \"kubernetes.io/projected/2e5be60e-fe89-43a0-b914-afbf646a6888-kube-api-access-nsql4\") pod \"glance-default-internal-api-0\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.712863 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e5be60e-fe89-43a0-b914-afbf646a6888-logs\") pod \"glance-default-internal-api-0\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.712943 4933 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.713059 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e5be60e-fe89-43a0-b914-afbf646a6888-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.713984 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e5be60e-fe89-43a0-b914-afbf646a6888-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.715708 4933 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.718435 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e5be60e-fe89-43a0-b914-afbf646a6888-logs\") pod \"glance-default-internal-api-0\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.724699 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.725110 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.730545 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e5be60e-fe89-43a0-b914-afbf646a6888-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.736780 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e5be60e-fe89-43a0-b914-afbf646a6888-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.737049 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e5be60e-fe89-43a0-b914-afbf646a6888-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.747598 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eb99423-f394-4f95-9279-13f68e394e4f" path="/var/lib/kubelet/pods/1eb99423-f394-4f95-9279-13f68e394e4f/volumes" Dec 01 
09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.748564 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35322d28-daef-4619-bd63-2baaa9ae46bd" path="/var/lib/kubelet/pods/35322d28-daef-4619-bd63-2baaa9ae46bd/volumes" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.781386 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsql4\" (UniqueName: \"kubernetes.io/projected/2e5be60e-fe89-43a0-b914-afbf646a6888-kube-api-access-nsql4\") pod \"glance-default-internal-api-0\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.787321 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4696ea4a-1bc2-4e1b-9209-67beddf255e8" path="/var/lib/kubelet/pods/4696ea4a-1bc2-4e1b-9209-67beddf255e8/volumes" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.787791 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e5be60e-fe89-43a0-b914-afbf646a6888-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.789388 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5" path="/var/lib/kubelet/pods/6c3bff5c-fb60-4d5d-a555-174c4d7ce1a5/volumes" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.791123 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c09596af-78c2-4f79-8f0b-121ef7f9ef9a" path="/var/lib/kubelet/pods/c09596af-78c2-4f79-8f0b-121ef7f9ef9a/volumes" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.801640 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6775f97bdb-vs7m8"] Dec 01 09:51:49 crc kubenswrapper[4933]: E1201 09:51:49.809573 4933 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4531da4_441c_4003_9f20_719853edb0b4.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c3bff5c_fb60_4d5d_a555_174c4d7ce1a5.slice\": RecentStats: unable to find data in memory cache]" Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.815481 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") " pod="openstack/glance-default-internal-api-0" Dec 01 09:51:49 crc kubenswrapper[4933]: W1201 09:51:49.839795 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod321a4d39_7ce4_4385_a6f5_5204da92683b.slice/crio-f2b78bb74a279fa341a62250e037739856a0a753135e82374d89c636a8130354 WatchSource:0}: Error finding container f2b78bb74a279fa341a62250e037739856a0a753135e82374d89c636a8130354: Status 404 returned error can't find the container with id f2b78bb74a279fa341a62250e037739856a0a753135e82374d89c636a8130354 Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 09:51:49.843539 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cstm4"] Dec 01 09:51:49 crc kubenswrapper[4933]: I1201 
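Each volume of the replacement glance-default-internal-api-0 pod passes through three stages visible above: VerifyControllerAttachedVolume, "operationExecutor.MountVolume started", and "MountVolume.SetUp succeeded" (the local PV additionally logs MountVolume.MountDevice with its device mount path /mnt/openstack/pv07). Assuming the klog timestamps are reliable, a small stdin filter can turn those start/succeed pairs into per-volume mount latencies; this is my own tooling sketch against the format above, not kubelet code:

// mount_latency.go - hypothetical helper: measures how long each volume
// took from "operationExecutor.MountVolume started" to
// "MountVolume.SetUp succeeded" in a capture like the one above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	klogTime = regexp.MustCompile(`I1201 (\d\d:\d\d:\d\d\.\d{6})`)
	started  = regexp.MustCompile(`operationExecutor\.MountVolume started for volume \\"([^\\]+)\\"`)
	setUp    = regexp.MustCompile(`MountVolume\.SetUp succeeded for volume \\"([^\\]+)\\"`)
)

// stamp extracts the klog wall-clock time (date-less, 6 fractional digits).
func stamp(line string) (time.Time, bool) {
	m := klogTime.FindStringSubmatch(line)
	if m == nil {
		return time.Time{}, false
	}
	t, err := time.Parse("15:04:05.000000", m[1])
	return t, err == nil
}

func main() {
	begin := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20)
	for sc.Scan() {
		line := sc.Text()
		t, ok := stamp(line)
		if !ok {
			continue
		}
		if m := started.FindStringSubmatch(line); m != nil {
			begin[m[1]] = t
		} else if m := setUp.FindStringSubmatch(line); m != nil {
			if b, seen := begin[m[1]]; seen {
				fmt.Printf("%-25s %v\n", m[1], t.Sub(b))
			}
		}
	}
}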
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.007224 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cstm4" event={"ID":"321a4d39-7ce4-4385-a6f5-5204da92683b","Type":"ContainerStarted","Data":"f2b78bb74a279fa341a62250e037739856a0a753135e82374d89c636a8130354"}
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.008801 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.019382 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6775f97bdb-vs7m8" event={"ID":"6ffcfb41-8086-4e28-b88a-da47dd38a844","Type":"ContainerStarted","Data":"e4f29358548baa73d85fb9c708be57e634954ae831577c5b450d6fa02614debd"}
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.040115 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75479c6864-2fvz5"]
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.049112 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8lwh6" event={"ID":"7853c81a-e365-473f-a4a7-4fcc87f625cd","Type":"ContainerStarted","Data":"e102516c2e0995cd1f33d56d0b19c9ebd5259d37f1539b1d52bb816fcc61bf83"}
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.083932 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea","Type":"ContainerStarted","Data":"439c574e9494f6a21f125d27c449e302b3eb1f97678a19d6e808352986326108"}
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.091171 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.094845 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-zddv7"]
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.106523 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-zddv7"
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.129225 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-zddv7"]
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.140444 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-8lwh6" podStartSLOduration=4.79775483 podStartE2EDuration="34.140413844s" podCreationTimestamp="2025-12-01 09:51:16 +0000 UTC" firstStartedPulling="2025-12-01 09:51:19.120368573 +0000 UTC m=+1169.762092188" lastFinishedPulling="2025-12-01 09:51:48.463027587 +0000 UTC m=+1199.104751202" observedRunningTime="2025-12-01 09:51:50.121885141 +0000 UTC m=+1200.763608776" watchObservedRunningTime="2025-12-01 09:51:50.140413844 +0000 UTC m=+1200.782137459"
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.228994 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-zddv7\" (UID: \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\") " pod="openstack/dnsmasq-dns-55f844cf75-zddv7"
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.229049 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-zddv7\" (UID: \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\") " pod="openstack/dnsmasq-dns-55f844cf75-zddv7"
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.229131 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-config\") pod \"dnsmasq-dns-55f844cf75-zddv7\" (UID: \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\") " pod="openstack/dnsmasq-dns-55f844cf75-zddv7"
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.229162 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsvpc\" (UniqueName: \"kubernetes.io/projected/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-kube-api-access-dsvpc\") pod \"dnsmasq-dns-55f844cf75-zddv7\" (UID: \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\") " pod="openstack/dnsmasq-dns-55f844cf75-zddv7"
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.229240 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-zddv7\" (UID: \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\") " pod="openstack/dnsmasq-dns-55f844cf75-zddv7"
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.229256 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-zddv7\" (UID: \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\") " pod="openstack/dnsmasq-dns-55f844cf75-zddv7"
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.270222 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-66cbb488fb-wvfw6"]
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.275022 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66cbb488fb-wvfw6"
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.286178 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.286642 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4n754"
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.292926 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.294911 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66cbb488fb-wvfw6"]
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.300801 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.332984 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-zddv7\" (UID: \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\") " pod="openstack/dnsmasq-dns-55f844cf75-zddv7"
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.333036 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-zddv7\" (UID: \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\") " pod="openstack/dnsmasq-dns-55f844cf75-zddv7"
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.333089 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-zddv7\" (UID: \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\") " pod="openstack/dnsmasq-dns-55f844cf75-zddv7"
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.333124 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-zddv7\" (UID: \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\") " pod="openstack/dnsmasq-dns-55f844cf75-zddv7"
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.333196 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-config\") pod \"dnsmasq-dns-55f844cf75-zddv7\" (UID: \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\") " pod="openstack/dnsmasq-dns-55f844cf75-zddv7"
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.333231 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsvpc\" (UniqueName: \"kubernetes.io/projected/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-kube-api-access-dsvpc\") pod \"dnsmasq-dns-55f844cf75-zddv7\" (UID: \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\") " pod="openstack/dnsmasq-dns-55f844cf75-zddv7"
Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.334796 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-zddv7\" (UID: \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\") " pod="openstack/dnsmasq-dns-55f844cf75-zddv7"
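The "Observed pod startup duration" entry for placement-db-sync-8lwh6 above carries enough timestamps to re-derive both figures: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (09:51:50.140413844 − 09:51:16 = 34.140413844s), and podStartSLOduration is that E2E value minus the image-pull window lastFinishedPulling − firstStartedPulling (34.140413844s − 29.342659014s = 4.79775483s). A few lines of Go confirm the arithmetic from the logged values (a sanity check of this one entry, not kubelet code):

// slo_check.go - re-derives the two durations in the
// "Observed pod startup duration" entry for placement-db-sync-8lwh6.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-12-01 09:51:16 +0000 UTC")
	firstPull := mustParse("2025-12-01 09:51:19.120368573 +0000 UTC")
	lastPull := mustParse("2025-12-01 09:51:48.463027587 +0000 UTC")
	observed := mustParse("2025-12-01 09:51:50.140413844 +0000 UTC") // watchObservedRunningTime

	e2e := observed.Sub(created)         // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // E2E minus the image-pull window

	fmt.Println(e2e) // 34.140413844s, matching podStartE2EDuration
	fmt.Println(slo) // 4.79775483s, matching podStartSLOduration
}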
pod="openstack/dnsmasq-dns-55f844cf75-zddv7" Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.337324 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-zddv7\" (UID: \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\") " pod="openstack/dnsmasq-dns-55f844cf75-zddv7" Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.337946 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-zddv7\" (UID: \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\") " pod="openstack/dnsmasq-dns-55f844cf75-zddv7" Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.338572 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-config\") pod \"dnsmasq-dns-55f844cf75-zddv7\" (UID: \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\") " pod="openstack/dnsmasq-dns-55f844cf75-zddv7" Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.351342 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-zddv7\" (UID: \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\") " pod="openstack/dnsmasq-dns-55f844cf75-zddv7" Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.392551 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsvpc\" (UniqueName: \"kubernetes.io/projected/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-kube-api-access-dsvpc\") pod \"dnsmasq-dns-55f844cf75-zddv7\" (UID: \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\") " pod="openstack/dnsmasq-dns-55f844cf75-zddv7" Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.435792 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ecffe4c0-9de1-418c-b57e-d0484fb89482-httpd-config\") pod \"neutron-66cbb488fb-wvfw6\" (UID: \"ecffe4c0-9de1-418c-b57e-d0484fb89482\") " pod="openstack/neutron-66cbb488fb-wvfw6" Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.435905 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ecffe4c0-9de1-418c-b57e-d0484fb89482-config\") pod \"neutron-66cbb488fb-wvfw6\" (UID: \"ecffe4c0-9de1-418c-b57e-d0484fb89482\") " pod="openstack/neutron-66cbb488fb-wvfw6" Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.435931 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2qgz\" (UniqueName: \"kubernetes.io/projected/ecffe4c0-9de1-418c-b57e-d0484fb89482-kube-api-access-n2qgz\") pod \"neutron-66cbb488fb-wvfw6\" (UID: \"ecffe4c0-9de1-418c-b57e-d0484fb89482\") " pod="openstack/neutron-66cbb488fb-wvfw6" Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.435957 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecffe4c0-9de1-418c-b57e-d0484fb89482-ovndb-tls-certs\") pod \"neutron-66cbb488fb-wvfw6\" (UID: \"ecffe4c0-9de1-418c-b57e-d0484fb89482\") " pod="openstack/neutron-66cbb488fb-wvfw6" Dec 01 
09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.435992 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecffe4c0-9de1-418c-b57e-d0484fb89482-combined-ca-bundle\") pod \"neutron-66cbb488fb-wvfw6\" (UID: \"ecffe4c0-9de1-418c-b57e-d0484fb89482\") " pod="openstack/neutron-66cbb488fb-wvfw6" Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.531489 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-zddv7" Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.537484 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ecffe4c0-9de1-418c-b57e-d0484fb89482-config\") pod \"neutron-66cbb488fb-wvfw6\" (UID: \"ecffe4c0-9de1-418c-b57e-d0484fb89482\") " pod="openstack/neutron-66cbb488fb-wvfw6" Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.538205 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2qgz\" (UniqueName: \"kubernetes.io/projected/ecffe4c0-9de1-418c-b57e-d0484fb89482-kube-api-access-n2qgz\") pod \"neutron-66cbb488fb-wvfw6\" (UID: \"ecffe4c0-9de1-418c-b57e-d0484fb89482\") " pod="openstack/neutron-66cbb488fb-wvfw6" Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.538246 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecffe4c0-9de1-418c-b57e-d0484fb89482-ovndb-tls-certs\") pod \"neutron-66cbb488fb-wvfw6\" (UID: \"ecffe4c0-9de1-418c-b57e-d0484fb89482\") " pod="openstack/neutron-66cbb488fb-wvfw6" Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.538287 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecffe4c0-9de1-418c-b57e-d0484fb89482-combined-ca-bundle\") pod \"neutron-66cbb488fb-wvfw6\" (UID: \"ecffe4c0-9de1-418c-b57e-d0484fb89482\") " pod="openstack/neutron-66cbb488fb-wvfw6" Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.539228 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ecffe4c0-9de1-418c-b57e-d0484fb89482-httpd-config\") pod \"neutron-66cbb488fb-wvfw6\" (UID: \"ecffe4c0-9de1-418c-b57e-d0484fb89482\") " pod="openstack/neutron-66cbb488fb-wvfw6" Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.543137 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ecffe4c0-9de1-418c-b57e-d0484fb89482-config\") pod \"neutron-66cbb488fb-wvfw6\" (UID: \"ecffe4c0-9de1-418c-b57e-d0484fb89482\") " pod="openstack/neutron-66cbb488fb-wvfw6" Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.554279 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecffe4c0-9de1-418c-b57e-d0484fb89482-ovndb-tls-certs\") pod \"neutron-66cbb488fb-wvfw6\" (UID: \"ecffe4c0-9de1-418c-b57e-d0484fb89482\") " pod="openstack/neutron-66cbb488fb-wvfw6" Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.555799 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ecffe4c0-9de1-418c-b57e-d0484fb89482-httpd-config\") pod \"neutron-66cbb488fb-wvfw6\" (UID: \"ecffe4c0-9de1-418c-b57e-d0484fb89482\") " 
pod="openstack/neutron-66cbb488fb-wvfw6" Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.560652 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecffe4c0-9de1-418c-b57e-d0484fb89482-combined-ca-bundle\") pod \"neutron-66cbb488fb-wvfw6\" (UID: \"ecffe4c0-9de1-418c-b57e-d0484fb89482\") " pod="openstack/neutron-66cbb488fb-wvfw6" Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.593395 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2qgz\" (UniqueName: \"kubernetes.io/projected/ecffe4c0-9de1-418c-b57e-d0484fb89482-kube-api-access-n2qgz\") pod \"neutron-66cbb488fb-wvfw6\" (UID: \"ecffe4c0-9de1-418c-b57e-d0484fb89482\") " pod="openstack/neutron-66cbb488fb-wvfw6" Dec 01 09:51:50 crc kubenswrapper[4933]: I1201 09:51:50.688880 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66cbb488fb-wvfw6" Dec 01 09:51:51 crc kubenswrapper[4933]: I1201 09:51:51.116072 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75479c6864-2fvz5" event={"ID":"000656f6-99fd-43a3-8ade-31b200d0c18a","Type":"ContainerStarted","Data":"cd4cbee8d6d537a4aaacae0f102f4aeec7d9132b154b948b8e20268d913c03d6"} Dec 01 09:51:51 crc kubenswrapper[4933]: I1201 09:51:51.121359 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cstm4" event={"ID":"321a4d39-7ce4-4385-a6f5-5204da92683b","Type":"ContainerStarted","Data":"3d701ac010eab170108e72d514195cdeb1835b69da5d15fc2434388456d80f5c"} Dec 01 09:51:51 crc kubenswrapper[4933]: I1201 09:51:51.139294 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"02e5b782-8d28-4206-aeb2-a9f1976abc8f","Type":"ContainerStarted","Data":"5a4710bad4a9e244f24e0b209252004d33ede45ae3f23d26c040208c8de4242b"} Dec 01 09:51:51 crc kubenswrapper[4933]: I1201 09:51:51.200395 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 09:51:51 crc kubenswrapper[4933]: I1201 09:51:51.230064 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-cstm4" podStartSLOduration=14.230032188 podStartE2EDuration="14.230032188s" podCreationTimestamp="2025-12-01 09:51:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:51:51.15336362 +0000 UTC m=+1201.795087255" watchObservedRunningTime="2025-12-01 09:51:51.230032188 +0000 UTC m=+1201.871755803" Dec 01 09:51:51 crc kubenswrapper[4933]: I1201 09:51:51.524257 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-zddv7"] Dec 01 09:51:51 crc kubenswrapper[4933]: I1201 09:51:51.857931 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66cbb488fb-wvfw6"] Dec 01 09:51:52 crc kubenswrapper[4933]: I1201 09:51:52.173527 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66cbb488fb-wvfw6" event={"ID":"ecffe4c0-9de1-418c-b57e-d0484fb89482","Type":"ContainerStarted","Data":"758461133e6ac20b1fdd3d92a9a9627415e40418dcb6ae152556415904707280"} Dec 01 09:51:52 crc kubenswrapper[4933]: I1201 09:51:52.177742 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-zddv7" 
event={"ID":"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d","Type":"ContainerStarted","Data":"ae7d2e19857cdcaecbc89f76cdd5c911e95901a9d59b6699c4564d560e0bf657"} Dec 01 09:51:52 crc kubenswrapper[4933]: I1201 09:51:52.183532 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75479c6864-2fvz5" event={"ID":"000656f6-99fd-43a3-8ade-31b200d0c18a","Type":"ContainerStarted","Data":"1160013da08ee75c7c8e797e0ae1954be47f186459a81de5a80b7e4e58d7a6e8"} Dec 01 09:51:52 crc kubenswrapper[4933]: I1201 09:51:52.183600 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75479c6864-2fvz5" event={"ID":"000656f6-99fd-43a3-8ade-31b200d0c18a","Type":"ContainerStarted","Data":"4362e35636a9dc639e6b8c0cde46d3c048aa463a5132d317a170092b7190510a"} Dec 01 09:51:52 crc kubenswrapper[4933]: I1201 09:51:52.199067 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2e5be60e-fe89-43a0-b914-afbf646a6888","Type":"ContainerStarted","Data":"6bdd545539f296903513e10cc1777f74d9ac84b27f971a3694756f4888e3af8e"} Dec 01 09:51:52 crc kubenswrapper[4933]: I1201 09:51:52.205624 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6775f97bdb-vs7m8" event={"ID":"6ffcfb41-8086-4e28-b88a-da47dd38a844","Type":"ContainerStarted","Data":"0874941382f63e0c82cae2a5dc94543aa96b15309c19c42f17c114109f943016"} Dec 01 09:51:52 crc kubenswrapper[4933]: I1201 09:51:52.205688 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6775f97bdb-vs7m8" event={"ID":"6ffcfb41-8086-4e28-b88a-da47dd38a844","Type":"ContainerStarted","Data":"e186129627d1bc3cfc463ba580d1136dd71ee66cf9a2f5a3016cd2790cb63bb0"} Dec 01 09:51:52 crc kubenswrapper[4933]: I1201 09:51:52.213525 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-7blxn" podUID="4696ea4a-1bc2-4e1b-9209-67beddf255e8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.118:5353: i/o timeout" Dec 01 09:51:52 crc kubenswrapper[4933]: I1201 09:51:52.223584 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"02e5b782-8d28-4206-aeb2-a9f1976abc8f","Type":"ContainerStarted","Data":"0dc2aa5e473292afd7f608803c2e5b66794ae41a16b9b9b1e26565bd52e3ab1b"} Dec 01 09:51:52 crc kubenswrapper[4933]: I1201 09:51:52.224738 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-75479c6864-2fvz5" podStartSLOduration=23.261427356 podStartE2EDuration="24.224700436s" podCreationTimestamp="2025-12-01 09:51:28 +0000 UTC" firstStartedPulling="2025-12-01 09:51:50.064496415 +0000 UTC m=+1200.706220030" lastFinishedPulling="2025-12-01 09:51:51.027769495 +0000 UTC m=+1201.669493110" observedRunningTime="2025-12-01 09:51:52.21872384 +0000 UTC m=+1202.860447455" watchObservedRunningTime="2025-12-01 09:51:52.224700436 +0000 UTC m=+1202.866424061" Dec 01 09:51:52 crc kubenswrapper[4933]: I1201 09:51:52.253997 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6775f97bdb-vs7m8" podStartSLOduration=23.363556477 podStartE2EDuration="24.253980723s" podCreationTimestamp="2025-12-01 09:51:28 +0000 UTC" firstStartedPulling="2025-12-01 09:51:49.784615441 +0000 UTC m=+1200.426339056" lastFinishedPulling="2025-12-01 09:51:50.675039687 +0000 UTC m=+1201.316763302" observedRunningTime="2025-12-01 09:51:52.251663496 +0000 UTC m=+1202.893387111" watchObservedRunningTime="2025-12-01 
09:51:52.253980723 +0000 UTC m=+1202.895704338" Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.267558 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2e5be60e-fe89-43a0-b914-afbf646a6888","Type":"ContainerStarted","Data":"7fbdefa9b331b64acda196f42d688581c0f552728712316aaca99f6f0ff1d6e8"} Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.288316 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66cbb488fb-wvfw6" event={"ID":"ecffe4c0-9de1-418c-b57e-d0484fb89482","Type":"ContainerStarted","Data":"ad68bf3884fdf1eba62ef989dcb28890bf939cf689eba4b31551a91adbb028e9"} Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.288395 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66cbb488fb-wvfw6" event={"ID":"ecffe4c0-9de1-418c-b57e-d0484fb89482","Type":"ContainerStarted","Data":"b39c4bcf1ee7c55db12b2b7f455dc62fc933e108cf52cde8ca2d93d667fde979"} Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.288418 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-66cbb488fb-wvfw6" Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.291377 4933 generic.go:334] "Generic (PLEG): container finished" podID="35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d" containerID="9e19d89474e22a0bb8d1f0a3b7197733be223be8be825d8da964ea2c91f8db4f" exitCode=0 Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.291500 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-zddv7" event={"ID":"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d","Type":"ContainerDied","Data":"9e19d89474e22a0bb8d1f0a3b7197733be223be8be825d8da964ea2c91f8db4f"} Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.374074 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-66cbb488fb-wvfw6" podStartSLOduration=3.373930219 podStartE2EDuration="3.373930219s" podCreationTimestamp="2025-12-01 09:51:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:51:53.373015887 +0000 UTC m=+1204.014739502" watchObservedRunningTime="2025-12-01 09:51:53.373930219 +0000 UTC m=+1204.015653834" Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.549687 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5dd758bcf-r4prx"] Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.567069 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5dd758bcf-r4prx" Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.587906 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.588213 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.596319 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5dd758bcf-r4prx"] Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.667163 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/39d17922-6634-497e-9dab-330fcbde16fe-httpd-config\") pod \"neutron-5dd758bcf-r4prx\" (UID: \"39d17922-6634-497e-9dab-330fcbde16fe\") " pod="openstack/neutron-5dd758bcf-r4prx" Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.667292 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39d17922-6634-497e-9dab-330fcbde16fe-combined-ca-bundle\") pod \"neutron-5dd758bcf-r4prx\" (UID: \"39d17922-6634-497e-9dab-330fcbde16fe\") " pod="openstack/neutron-5dd758bcf-r4prx" Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.667337 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39d17922-6634-497e-9dab-330fcbde16fe-public-tls-certs\") pod \"neutron-5dd758bcf-r4prx\" (UID: \"39d17922-6634-497e-9dab-330fcbde16fe\") " pod="openstack/neutron-5dd758bcf-r4prx" Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.667417 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg5xq\" (UniqueName: \"kubernetes.io/projected/39d17922-6634-497e-9dab-330fcbde16fe-kube-api-access-lg5xq\") pod \"neutron-5dd758bcf-r4prx\" (UID: \"39d17922-6634-497e-9dab-330fcbde16fe\") " pod="openstack/neutron-5dd758bcf-r4prx" Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.667491 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/39d17922-6634-497e-9dab-330fcbde16fe-ovndb-tls-certs\") pod \"neutron-5dd758bcf-r4prx\" (UID: \"39d17922-6634-497e-9dab-330fcbde16fe\") " pod="openstack/neutron-5dd758bcf-r4prx" Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.667570 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/39d17922-6634-497e-9dab-330fcbde16fe-config\") pod \"neutron-5dd758bcf-r4prx\" (UID: \"39d17922-6634-497e-9dab-330fcbde16fe\") " pod="openstack/neutron-5dd758bcf-r4prx" Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.667604 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39d17922-6634-497e-9dab-330fcbde16fe-internal-tls-certs\") pod \"neutron-5dd758bcf-r4prx\" (UID: \"39d17922-6634-497e-9dab-330fcbde16fe\") " pod="openstack/neutron-5dd758bcf-r4prx" Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.770089 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg5xq\" (UniqueName: 
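The earlier "Probe failed" record shows what a TCP readiness probe failure looks like from the prober: a plain dial timeout against the pod IP and port (10.217.0.118:5353) of the dnsmasq-dns pod that was already being torn down. Functionally such a probe reduces to a timed TCP dial; this is an illustration only (kubelet's prober adds result caching, thresholds, and event recording):

// tcp_probe.go - roughly what a TCP readiness probe does; the address is
// the one from the failed probe above, the function name is my own.
package main

import (
	"fmt"
	"net"
	"time"
)

// probeTCP succeeds if a TCP connection can be opened within the timeout.
func probeTCP(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err // e.g. "dial tcp 10.217.0.118:5353: i/o timeout"
	}
	return conn.Close()
}

func main() {
	if err := probeTCP("10.217.0.118:5353", time.Second); err != nil {
		fmt.Println("Probe failed:", err)
		return
	}
	fmt.Println("ready")
}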
\"kubernetes.io/projected/39d17922-6634-497e-9dab-330fcbde16fe-kube-api-access-lg5xq\") pod \"neutron-5dd758bcf-r4prx\" (UID: \"39d17922-6634-497e-9dab-330fcbde16fe\") " pod="openstack/neutron-5dd758bcf-r4prx" Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.770217 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/39d17922-6634-497e-9dab-330fcbde16fe-ovndb-tls-certs\") pod \"neutron-5dd758bcf-r4prx\" (UID: \"39d17922-6634-497e-9dab-330fcbde16fe\") " pod="openstack/neutron-5dd758bcf-r4prx" Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.770285 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/39d17922-6634-497e-9dab-330fcbde16fe-config\") pod \"neutron-5dd758bcf-r4prx\" (UID: \"39d17922-6634-497e-9dab-330fcbde16fe\") " pod="openstack/neutron-5dd758bcf-r4prx" Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.770334 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39d17922-6634-497e-9dab-330fcbde16fe-internal-tls-certs\") pod \"neutron-5dd758bcf-r4prx\" (UID: \"39d17922-6634-497e-9dab-330fcbde16fe\") " pod="openstack/neutron-5dd758bcf-r4prx" Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.770365 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/39d17922-6634-497e-9dab-330fcbde16fe-httpd-config\") pod \"neutron-5dd758bcf-r4prx\" (UID: \"39d17922-6634-497e-9dab-330fcbde16fe\") " pod="openstack/neutron-5dd758bcf-r4prx" Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.770434 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39d17922-6634-497e-9dab-330fcbde16fe-combined-ca-bundle\") pod \"neutron-5dd758bcf-r4prx\" (UID: \"39d17922-6634-497e-9dab-330fcbde16fe\") " pod="openstack/neutron-5dd758bcf-r4prx" Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.770454 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39d17922-6634-497e-9dab-330fcbde16fe-public-tls-certs\") pod \"neutron-5dd758bcf-r4prx\" (UID: \"39d17922-6634-497e-9dab-330fcbde16fe\") " pod="openstack/neutron-5dd758bcf-r4prx" Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.790150 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/39d17922-6634-497e-9dab-330fcbde16fe-httpd-config\") pod \"neutron-5dd758bcf-r4prx\" (UID: \"39d17922-6634-497e-9dab-330fcbde16fe\") " pod="openstack/neutron-5dd758bcf-r4prx" Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.849041 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/39d17922-6634-497e-9dab-330fcbde16fe-ovndb-tls-certs\") pod \"neutron-5dd758bcf-r4prx\" (UID: \"39d17922-6634-497e-9dab-330fcbde16fe\") " pod="openstack/neutron-5dd758bcf-r4prx" Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.852662 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39d17922-6634-497e-9dab-330fcbde16fe-combined-ca-bundle\") pod \"neutron-5dd758bcf-r4prx\" (UID: \"39d17922-6634-497e-9dab-330fcbde16fe\") " 
pod="openstack/neutron-5dd758bcf-r4prx" Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.856248 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39d17922-6634-497e-9dab-330fcbde16fe-internal-tls-certs\") pod \"neutron-5dd758bcf-r4prx\" (UID: \"39d17922-6634-497e-9dab-330fcbde16fe\") " pod="openstack/neutron-5dd758bcf-r4prx" Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.856762 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39d17922-6634-497e-9dab-330fcbde16fe-public-tls-certs\") pod \"neutron-5dd758bcf-r4prx\" (UID: \"39d17922-6634-497e-9dab-330fcbde16fe\") " pod="openstack/neutron-5dd758bcf-r4prx" Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.857006 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg5xq\" (UniqueName: \"kubernetes.io/projected/39d17922-6634-497e-9dab-330fcbde16fe-kube-api-access-lg5xq\") pod \"neutron-5dd758bcf-r4prx\" (UID: \"39d17922-6634-497e-9dab-330fcbde16fe\") " pod="openstack/neutron-5dd758bcf-r4prx" Dec 01 09:51:53 crc kubenswrapper[4933]: I1201 09:51:53.865845 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/39d17922-6634-497e-9dab-330fcbde16fe-config\") pod \"neutron-5dd758bcf-r4prx\" (UID: \"39d17922-6634-497e-9dab-330fcbde16fe\") " pod="openstack/neutron-5dd758bcf-r4prx" Dec 01 09:51:54 crc kubenswrapper[4933]: I1201 09:51:54.051017 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5dd758bcf-r4prx" Dec 01 09:51:54 crc kubenswrapper[4933]: I1201 09:51:54.363791 4933 generic.go:334] "Generic (PLEG): container finished" podID="7853c81a-e365-473f-a4a7-4fcc87f625cd" containerID="e102516c2e0995cd1f33d56d0b19c9ebd5259d37f1539b1d52bb816fcc61bf83" exitCode=0 Dec 01 09:51:54 crc kubenswrapper[4933]: I1201 09:51:54.364350 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8lwh6" event={"ID":"7853c81a-e365-473f-a4a7-4fcc87f625cd","Type":"ContainerDied","Data":"e102516c2e0995cd1f33d56d0b19c9ebd5259d37f1539b1d52bb816fcc61bf83"} Dec 01 09:51:54 crc kubenswrapper[4933]: I1201 09:51:54.384428 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2e5be60e-fe89-43a0-b914-afbf646a6888","Type":"ContainerStarted","Data":"5c522b75ceeeaee2e8506acc018bdcf23120b042987be2fe9a13e94298e082a4"} Dec 01 09:51:54 crc kubenswrapper[4933]: I1201 09:51:54.404749 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"02e5b782-8d28-4206-aeb2-a9f1976abc8f","Type":"ContainerStarted","Data":"71af3c1e997347f5888993bda334f7ecd26ddb62e1e0925557a88be906a17ba3"} Dec 01 09:51:54 crc kubenswrapper[4933]: I1201 09:51:54.452415 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=17.452394399 podStartE2EDuration="17.452394399s" podCreationTimestamp="2025-12-01 09:51:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:51:54.447446938 +0000 UTC m=+1205.089170553" watchObservedRunningTime="2025-12-01 09:51:54.452394399 +0000 UTC m=+1205.094118014" Dec 01 09:51:54 crc kubenswrapper[4933]: I1201 09:51:54.976499 4933 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5dd758bcf-r4prx"] Dec 01 09:51:54 crc kubenswrapper[4933]: W1201 09:51:54.997765 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39d17922_6634_497e_9dab_330fcbde16fe.slice/crio-5c65268e221b9bfb330157a5ae919e56aa72e38488173777714568969f64673d WatchSource:0}: Error finding container 5c65268e221b9bfb330157a5ae919e56aa72e38488173777714568969f64673d: Status 404 returned error can't find the container with id 5c65268e221b9bfb330157a5ae919e56aa72e38488173777714568969f64673d Dec 01 09:51:55 crc kubenswrapper[4933]: I1201 09:51:55.413252 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dd758bcf-r4prx" event={"ID":"39d17922-6634-497e-9dab-330fcbde16fe","Type":"ContainerStarted","Data":"5c65268e221b9bfb330157a5ae919e56aa72e38488173777714568969f64673d"} Dec 01 09:51:57 crc kubenswrapper[4933]: I1201 09:51:57.437056 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-zddv7" event={"ID":"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d","Type":"ContainerStarted","Data":"16aeea625e8805f7e04e40c09f7e2e48d8f49c14ccad342d7a09212f30db7586"} Dec 01 09:51:57 crc kubenswrapper[4933]: I1201 09:51:57.544383 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 01 09:51:57 crc kubenswrapper[4933]: I1201 09:51:57.544459 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 01 09:51:57 crc kubenswrapper[4933]: I1201 09:51:57.595654 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 01 09:51:57 crc kubenswrapper[4933]: I1201 09:51:57.596176 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 01 09:51:58 crc kubenswrapper[4933]: I1201 09:51:58.445713 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 01 09:51:58 crc kubenswrapper[4933]: I1201 09:51:58.446431 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 01 09:51:58 crc kubenswrapper[4933]: I1201 09:51:58.469816 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.46979423 podStartE2EDuration="9.46979423s" podCreationTimestamp="2025-12-01 09:51:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:51:58.464777467 +0000 UTC m=+1209.106501082" watchObservedRunningTime="2025-12-01 09:51:58.46979423 +0000 UTC m=+1209.111517845" Dec 01 09:51:58 crc kubenswrapper[4933]: I1201 09:51:58.492911 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-zddv7" podStartSLOduration=8.492876876 podStartE2EDuration="8.492876876s" podCreationTimestamp="2025-12-01 09:51:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:51:58.48654576 +0000 UTC m=+1209.128269385" watchObservedRunningTime="2025-12-01 09:51:58.492876876 +0000 UTC m=+1209.134600491" Dec 01 09:51:58 crc kubenswrapper[4933]: I1201 09:51:58.776984 4933 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8lwh6" Dec 01 09:51:58 crc kubenswrapper[4933]: I1201 09:51:58.817236 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7853c81a-e365-473f-a4a7-4fcc87f625cd-combined-ca-bundle\") pod \"7853c81a-e365-473f-a4a7-4fcc87f625cd\" (UID: \"7853c81a-e365-473f-a4a7-4fcc87f625cd\") " Dec 01 09:51:58 crc kubenswrapper[4933]: I1201 09:51:58.817424 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7853c81a-e365-473f-a4a7-4fcc87f625cd-logs\") pod \"7853c81a-e365-473f-a4a7-4fcc87f625cd\" (UID: \"7853c81a-e365-473f-a4a7-4fcc87f625cd\") " Dec 01 09:51:58 crc kubenswrapper[4933]: I1201 09:51:58.817466 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7853c81a-e365-473f-a4a7-4fcc87f625cd-config-data\") pod \"7853c81a-e365-473f-a4a7-4fcc87f625cd\" (UID: \"7853c81a-e365-473f-a4a7-4fcc87f625cd\") " Dec 01 09:51:58 crc kubenswrapper[4933]: I1201 09:51:58.817576 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7853c81a-e365-473f-a4a7-4fcc87f625cd-scripts\") pod \"7853c81a-e365-473f-a4a7-4fcc87f625cd\" (UID: \"7853c81a-e365-473f-a4a7-4fcc87f625cd\") " Dec 01 09:51:58 crc kubenswrapper[4933]: I1201 09:51:58.817646 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqcdz\" (UniqueName: \"kubernetes.io/projected/7853c81a-e365-473f-a4a7-4fcc87f625cd-kube-api-access-bqcdz\") pod \"7853c81a-e365-473f-a4a7-4fcc87f625cd\" (UID: \"7853c81a-e365-473f-a4a7-4fcc87f625cd\") " Dec 01 09:51:58 crc kubenswrapper[4933]: I1201 09:51:58.818069 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7853c81a-e365-473f-a4a7-4fcc87f625cd-logs" (OuterVolumeSpecName: "logs") pod "7853c81a-e365-473f-a4a7-4fcc87f625cd" (UID: "7853c81a-e365-473f-a4a7-4fcc87f625cd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:51:58 crc kubenswrapper[4933]: I1201 09:51:58.818762 4933 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7853c81a-e365-473f-a4a7-4fcc87f625cd-logs\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:58 crc kubenswrapper[4933]: I1201 09:51:58.828797 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7853c81a-e365-473f-a4a7-4fcc87f625cd-scripts" (OuterVolumeSpecName: "scripts") pod "7853c81a-e365-473f-a4a7-4fcc87f625cd" (UID: "7853c81a-e365-473f-a4a7-4fcc87f625cd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:51:58 crc kubenswrapper[4933]: I1201 09:51:58.829710 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7853c81a-e365-473f-a4a7-4fcc87f625cd-kube-api-access-bqcdz" (OuterVolumeSpecName: "kube-api-access-bqcdz") pod "7853c81a-e365-473f-a4a7-4fcc87f625cd" (UID: "7853c81a-e365-473f-a4a7-4fcc87f625cd"). InnerVolumeSpecName "kube-api-access-bqcdz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:51:58 crc kubenswrapper[4933]: I1201 09:51:58.924826 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7853c81a-e365-473f-a4a7-4fcc87f625cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7853c81a-e365-473f-a4a7-4fcc87f625cd" (UID: "7853c81a-e365-473f-a4a7-4fcc87f625cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:51:58 crc kubenswrapper[4933]: I1201 09:51:58.925840 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7853c81a-e365-473f-a4a7-4fcc87f625cd-combined-ca-bundle\") pod \"7853c81a-e365-473f-a4a7-4fcc87f625cd\" (UID: \"7853c81a-e365-473f-a4a7-4fcc87f625cd\") " Dec 01 09:51:58 crc kubenswrapper[4933]: W1201 09:51:58.926231 4933 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/7853c81a-e365-473f-a4a7-4fcc87f625cd/volumes/kubernetes.io~secret/combined-ca-bundle Dec 01 09:51:58 crc kubenswrapper[4933]: I1201 09:51:58.926265 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7853c81a-e365-473f-a4a7-4fcc87f625cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7853c81a-e365-473f-a4a7-4fcc87f625cd" (UID: "7853c81a-e365-473f-a4a7-4fcc87f625cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:51:58 crc kubenswrapper[4933]: I1201 09:51:58.926976 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7853c81a-e365-473f-a4a7-4fcc87f625cd-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:58 crc kubenswrapper[4933]: I1201 09:51:58.927007 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqcdz\" (UniqueName: \"kubernetes.io/projected/7853c81a-e365-473f-a4a7-4fcc87f625cd-kube-api-access-bqcdz\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:58 crc kubenswrapper[4933]: I1201 09:51:58.927024 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7853c81a-e365-473f-a4a7-4fcc87f625cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:58 crc kubenswrapper[4933]: I1201 09:51:58.936013 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7853c81a-e365-473f-a4a7-4fcc87f625cd-config-data" (OuterVolumeSpecName: "config-data") pod "7853c81a-e365-473f-a4a7-4fcc87f625cd" (UID: "7853c81a-e365-473f-a4a7-4fcc87f625cd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:51:59 crc kubenswrapper[4933]: I1201 09:51:59.031463 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7853c81a-e365-473f-a4a7-4fcc87f625cd-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:51:59 crc kubenswrapper[4933]: I1201 09:51:59.060418 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:51:59 crc kubenswrapper[4933]: I1201 09:51:59.060487 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:51:59 crc kubenswrapper[4933]: I1201 09:51:59.135799 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:51:59 crc kubenswrapper[4933]: I1201 09:51:59.137035 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:51:59 crc kubenswrapper[4933]: I1201 09:51:59.460844 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8lwh6" event={"ID":"7853c81a-e365-473f-a4a7-4fcc87f625cd","Type":"ContainerDied","Data":"dd8b57b327d99b7f517c48c820c9b4b005ed86fde35b7948a77fb030e6fb9ae5"} Dec 01 09:51:59 crc kubenswrapper[4933]: I1201 09:51:59.460889 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd8b57b327d99b7f517c48c820c9b4b005ed86fde35b7948a77fb030e6fb9ae5" Dec 01 09:51:59 crc kubenswrapper[4933]: I1201 09:51:59.460953 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8lwh6" Dec 01 09:51:59 crc kubenswrapper[4933]: I1201 09:51:59.469218 4933 generic.go:334] "Generic (PLEG): container finished" podID="321a4d39-7ce4-4385-a6f5-5204da92683b" containerID="3d701ac010eab170108e72d514195cdeb1835b69da5d15fc2434388456d80f5c" exitCode=0 Dec 01 09:51:59 crc kubenswrapper[4933]: I1201 09:51:59.470184 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cstm4" event={"ID":"321a4d39-7ce4-4385-a6f5-5204da92683b","Type":"ContainerDied","Data":"3d701ac010eab170108e72d514195cdeb1835b69da5d15fc2434388456d80f5c"} Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.004393 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-55788c59f6-zd5sp"] Dec 01 09:52:00 crc kubenswrapper[4933]: E1201 09:52:00.005521 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7853c81a-e365-473f-a4a7-4fcc87f625cd" containerName="placement-db-sync" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.005547 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="7853c81a-e365-473f-a4a7-4fcc87f625cd" containerName="placement-db-sync" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.005775 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="7853c81a-e365-473f-a4a7-4fcc87f625cd" containerName="placement-db-sync" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.007086 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.012056 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.012435 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9pw84" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.012705 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.022195 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.023141 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.040576 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-55788c59f6-zd5sp"] Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.094693 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.094773 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.200030 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2caece4-8b42-4e68-9a5d-096ef39b4120-public-tls-certs\") pod \"placement-55788c59f6-zd5sp\" (UID: \"c2caece4-8b42-4e68-9a5d-096ef39b4120\") " pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.200115 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2caece4-8b42-4e68-9a5d-096ef39b4120-config-data\") pod \"placement-55788c59f6-zd5sp\" (UID: \"c2caece4-8b42-4e68-9a5d-096ef39b4120\") " pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.200230 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg4cw\" (UniqueName: \"kubernetes.io/projected/c2caece4-8b42-4e68-9a5d-096ef39b4120-kube-api-access-bg4cw\") pod \"placement-55788c59f6-zd5sp\" (UID: \"c2caece4-8b42-4e68-9a5d-096ef39b4120\") " pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.200278 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2caece4-8b42-4e68-9a5d-096ef39b4120-logs\") pod \"placement-55788c59f6-zd5sp\" (UID: \"c2caece4-8b42-4e68-9a5d-096ef39b4120\") " pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.200347 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2caece4-8b42-4e68-9a5d-096ef39b4120-internal-tls-certs\") pod \"placement-55788c59f6-zd5sp\" (UID: \"c2caece4-8b42-4e68-9a5d-096ef39b4120\") " pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.200379 4933 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2caece4-8b42-4e68-9a5d-096ef39b4120-combined-ca-bundle\") pod \"placement-55788c59f6-zd5sp\" (UID: \"c2caece4-8b42-4e68-9a5d-096ef39b4120\") " pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.200402 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2caece4-8b42-4e68-9a5d-096ef39b4120-scripts\") pod \"placement-55788c59f6-zd5sp\" (UID: \"c2caece4-8b42-4e68-9a5d-096ef39b4120\") " pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.270421 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.272562 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.301658 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg4cw\" (UniqueName: \"kubernetes.io/projected/c2caece4-8b42-4e68-9a5d-096ef39b4120-kube-api-access-bg4cw\") pod \"placement-55788c59f6-zd5sp\" (UID: \"c2caece4-8b42-4e68-9a5d-096ef39b4120\") " pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.301723 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2caece4-8b42-4e68-9a5d-096ef39b4120-logs\") pod \"placement-55788c59f6-zd5sp\" (UID: \"c2caece4-8b42-4e68-9a5d-096ef39b4120\") " pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.301769 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2caece4-8b42-4e68-9a5d-096ef39b4120-internal-tls-certs\") pod \"placement-55788c59f6-zd5sp\" (UID: \"c2caece4-8b42-4e68-9a5d-096ef39b4120\") " pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.301800 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2caece4-8b42-4e68-9a5d-096ef39b4120-combined-ca-bundle\") pod \"placement-55788c59f6-zd5sp\" (UID: \"c2caece4-8b42-4e68-9a5d-096ef39b4120\") " pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.301821 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2caece4-8b42-4e68-9a5d-096ef39b4120-scripts\") pod \"placement-55788c59f6-zd5sp\" (UID: \"c2caece4-8b42-4e68-9a5d-096ef39b4120\") " pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.301848 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2caece4-8b42-4e68-9a5d-096ef39b4120-public-tls-certs\") pod \"placement-55788c59f6-zd5sp\" (UID: \"c2caece4-8b42-4e68-9a5d-096ef39b4120\") " pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.301868 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c2caece4-8b42-4e68-9a5d-096ef39b4120-config-data\") pod \"placement-55788c59f6-zd5sp\" (UID: \"c2caece4-8b42-4e68-9a5d-096ef39b4120\") " pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.319845 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2caece4-8b42-4e68-9a5d-096ef39b4120-logs\") pod \"placement-55788c59f6-zd5sp\" (UID: \"c2caece4-8b42-4e68-9a5d-096ef39b4120\") " pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.320368 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2caece4-8b42-4e68-9a5d-096ef39b4120-scripts\") pod \"placement-55788c59f6-zd5sp\" (UID: \"c2caece4-8b42-4e68-9a5d-096ef39b4120\") " pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.322180 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2caece4-8b42-4e68-9a5d-096ef39b4120-combined-ca-bundle\") pod \"placement-55788c59f6-zd5sp\" (UID: \"c2caece4-8b42-4e68-9a5d-096ef39b4120\") " pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.322365 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2caece4-8b42-4e68-9a5d-096ef39b4120-internal-tls-certs\") pod \"placement-55788c59f6-zd5sp\" (UID: \"c2caece4-8b42-4e68-9a5d-096ef39b4120\") " pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.328041 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2caece4-8b42-4e68-9a5d-096ef39b4120-public-tls-certs\") pod \"placement-55788c59f6-zd5sp\" (UID: \"c2caece4-8b42-4e68-9a5d-096ef39b4120\") " pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.335622 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2caece4-8b42-4e68-9a5d-096ef39b4120-config-data\") pod \"placement-55788c59f6-zd5sp\" (UID: \"c2caece4-8b42-4e68-9a5d-096ef39b4120\") " pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.339010 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg4cw\" (UniqueName: \"kubernetes.io/projected/c2caece4-8b42-4e68-9a5d-096ef39b4120-kube-api-access-bg4cw\") pod \"placement-55788c59f6-zd5sp\" (UID: \"c2caece4-8b42-4e68-9a5d-096ef39b4120\") " pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.397940 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.486810 4933 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.486839 4933 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.487897 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.488206 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 01 09:52:00 crc kubenswrapper[4933]: I1201 09:52:00.532544 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-zddv7" Dec 01 09:52:01 crc kubenswrapper[4933]: I1201 09:52:01.277405 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 01 09:52:01 crc kubenswrapper[4933]: I1201 09:52:01.286874 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 01 09:52:02 crc kubenswrapper[4933]: I1201 09:52:02.550247 4933 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 09:52:02 crc kubenswrapper[4933]: I1201 09:52:02.552255 4933 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 09:52:03 crc kubenswrapper[4933]: I1201 09:52:03.214659 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 01 09:52:03 crc kubenswrapper[4933]: I1201 09:52:03.242903 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 01 09:52:05 crc kubenswrapper[4933]: I1201 09:52:05.534584 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-zddv7" Dec 01 09:52:05 crc kubenswrapper[4933]: I1201 09:52:05.804341 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-ft7zj"] Dec 01 09:52:05 crc kubenswrapper[4933]: I1201 09:52:05.805144 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" podUID="ec93f2f8-8c54-443a-9560-4002b2b36a2c" containerName="dnsmasq-dns" containerID="cri-o://ee793ae4ee03be502f4c3729336120f518234ff27751b14bac5fbe36fe1476c7" gracePeriod=10 Dec 01 09:52:06 crc kubenswrapper[4933]: I1201 09:52:06.581955 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cstm4" Dec 01 09:52:06 crc kubenswrapper[4933]: I1201 09:52:06.667113 4933 generic.go:334] "Generic (PLEG): container finished" podID="ec93f2f8-8c54-443a-9560-4002b2b36a2c" containerID="ee793ae4ee03be502f4c3729336120f518234ff27751b14bac5fbe36fe1476c7" exitCode=0 Dec 01 09:52:06 crc kubenswrapper[4933]: I1201 09:52:06.667258 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" event={"ID":"ec93f2f8-8c54-443a-9560-4002b2b36a2c","Type":"ContainerDied","Data":"ee793ae4ee03be502f4c3729336120f518234ff27751b14bac5fbe36fe1476c7"} Dec 01 09:52:06 crc kubenswrapper[4933]: I1201 09:52:06.673599 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cstm4" event={"ID":"321a4d39-7ce4-4385-a6f5-5204da92683b","Type":"ContainerDied","Data":"f2b78bb74a279fa341a62250e037739856a0a753135e82374d89c636a8130354"} Dec 01 09:52:06 crc kubenswrapper[4933]: I1201 09:52:06.673655 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2b78bb74a279fa341a62250e037739856a0a753135e82374d89c636a8130354" Dec 01 09:52:06 crc kubenswrapper[4933]: I1201 09:52:06.673730 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cstm4" Dec 01 09:52:06 crc kubenswrapper[4933]: I1201 09:52:06.705498 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnlwq\" (UniqueName: \"kubernetes.io/projected/321a4d39-7ce4-4385-a6f5-5204da92683b-kube-api-access-nnlwq\") pod \"321a4d39-7ce4-4385-a6f5-5204da92683b\" (UID: \"321a4d39-7ce4-4385-a6f5-5204da92683b\") " Dec 01 09:52:06 crc kubenswrapper[4933]: I1201 09:52:06.705636 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-combined-ca-bundle\") pod \"321a4d39-7ce4-4385-a6f5-5204da92683b\" (UID: \"321a4d39-7ce4-4385-a6f5-5204da92683b\") " Dec 01 09:52:06 crc kubenswrapper[4933]: I1201 09:52:06.705821 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-scripts\") pod \"321a4d39-7ce4-4385-a6f5-5204da92683b\" (UID: \"321a4d39-7ce4-4385-a6f5-5204da92683b\") " Dec 01 09:52:06 crc kubenswrapper[4933]: I1201 09:52:06.705854 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-fernet-keys\") pod \"321a4d39-7ce4-4385-a6f5-5204da92683b\" (UID: \"321a4d39-7ce4-4385-a6f5-5204da92683b\") " Dec 01 09:52:06 crc kubenswrapper[4933]: I1201 09:52:06.705907 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-config-data\") pod \"321a4d39-7ce4-4385-a6f5-5204da92683b\" (UID: \"321a4d39-7ce4-4385-a6f5-5204da92683b\") " Dec 01 09:52:06 crc kubenswrapper[4933]: I1201 09:52:06.705939 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-credential-keys\") pod \"321a4d39-7ce4-4385-a6f5-5204da92683b\" (UID: \"321a4d39-7ce4-4385-a6f5-5204da92683b\") " Dec 01 09:52:06 crc kubenswrapper[4933]: I1201 09:52:06.717992 4933 operation_generator.go:803] 
Dec 01 09:52:06 crc kubenswrapper[4933]: I1201 09:52:06.728741 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/321a4d39-7ce4-4385-a6f5-5204da92683b-kube-api-access-nnlwq" (OuterVolumeSpecName: "kube-api-access-nnlwq") pod "321a4d39-7ce4-4385-a6f5-5204da92683b" (UID: "321a4d39-7ce4-4385-a6f5-5204da92683b"). InnerVolumeSpecName "kube-api-access-nnlwq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:52:06 crc kubenswrapper[4933]: I1201 09:52:06.730539 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "321a4d39-7ce4-4385-a6f5-5204da92683b" (UID: "321a4d39-7ce4-4385-a6f5-5204da92683b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:52:06 crc kubenswrapper[4933]: I1201 09:52:06.730599 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-scripts" (OuterVolumeSpecName: "scripts") pod "321a4d39-7ce4-4385-a6f5-5204da92683b" (UID: "321a4d39-7ce4-4385-a6f5-5204da92683b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:52:06 crc kubenswrapper[4933]: I1201 09:52:06.759936 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "321a4d39-7ce4-4385-a6f5-5204da92683b" (UID: "321a4d39-7ce4-4385-a6f5-5204da92683b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:52:06 crc kubenswrapper[4933]: I1201 09:52:06.787433 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-config-data" (OuterVolumeSpecName: "config-data") pod "321a4d39-7ce4-4385-a6f5-5204da92683b" (UID: "321a4d39-7ce4-4385-a6f5-5204da92683b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:52:06 crc kubenswrapper[4933]: I1201 09:52:06.809643 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnlwq\" (UniqueName: \"kubernetes.io/projected/321a4d39-7ce4-4385-a6f5-5204da92683b-kube-api-access-nnlwq\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:06 crc kubenswrapper[4933]: I1201 09:52:06.809702 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:06 crc kubenswrapper[4933]: I1201 09:52:06.809722 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:06 crc kubenswrapper[4933]: I1201 09:52:06.809731 4933 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:06 crc kubenswrapper[4933]: I1201 09:52:06.809744 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:06 crc kubenswrapper[4933]: I1201 09:52:06.809757 4933 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/321a4d39-7ce4-4385-a6f5-5204da92683b-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.287610 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.422133 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfkzh\" (UniqueName: \"kubernetes.io/projected/ec93f2f8-8c54-443a-9560-4002b2b36a2c-kube-api-access-pfkzh\") pod \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\" (UID: \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\") " Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.422627 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-config\") pod \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\" (UID: \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\") " Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.422708 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-ovsdbserver-sb\") pod \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\" (UID: \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\") " Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.422828 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-ovsdbserver-nb\") pod \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\" (UID: \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\") " Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.422868 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-dns-svc\") pod \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\" (UID: 
\"ec93f2f8-8c54-443a-9560-4002b2b36a2c\") " Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.422926 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-dns-swift-storage-0\") pod \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\" (UID: \"ec93f2f8-8c54-443a-9560-4002b2b36a2c\") " Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.523398 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec93f2f8-8c54-443a-9560-4002b2b36a2c-kube-api-access-pfkzh" (OuterVolumeSpecName: "kube-api-access-pfkzh") pod "ec93f2f8-8c54-443a-9560-4002b2b36a2c" (UID: "ec93f2f8-8c54-443a-9560-4002b2b36a2c"). InnerVolumeSpecName "kube-api-access-pfkzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.525186 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfkzh\" (UniqueName: \"kubernetes.io/projected/ec93f2f8-8c54-443a-9560-4002b2b36a2c-kube-api-access-pfkzh\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.541084 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-config" (OuterVolumeSpecName: "config") pod "ec93f2f8-8c54-443a-9560-4002b2b36a2c" (UID: "ec93f2f8-8c54-443a-9560-4002b2b36a2c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.561011 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ec93f2f8-8c54-443a-9560-4002b2b36a2c" (UID: "ec93f2f8-8c54-443a-9560-4002b2b36a2c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.578533 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ec93f2f8-8c54-443a-9560-4002b2b36a2c" (UID: "ec93f2f8-8c54-443a-9560-4002b2b36a2c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.619926 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ec93f2f8-8c54-443a-9560-4002b2b36a2c" (UID: "ec93f2f8-8c54-443a-9560-4002b2b36a2c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.621939 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ec93f2f8-8c54-443a-9560-4002b2b36a2c" (UID: "ec93f2f8-8c54-443a-9560-4002b2b36a2c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.628174 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.628475 4933 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.628601 4933 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.628680 4933 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.628750 4933 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec93f2f8-8c54-443a-9560-4002b2b36a2c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.736079 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dd758bcf-r4prx" event={"ID":"39d17922-6634-497e-9dab-330fcbde16fe","Type":"ContainerStarted","Data":"ac0e85b1b422e0c19a93b67044ffe074bd134fe368a726f430547b2b7e438ff2"} Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.764377 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" event={"ID":"ec93f2f8-8c54-443a-9560-4002b2b36a2c","Type":"ContainerDied","Data":"c2281e0f10956439f5009e178800bfdfa7bb4eafa852b4c7d7364275d5c6d37d"} Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.764453 4933 scope.go:117] "RemoveContainer" containerID="ee793ae4ee03be502f4c3729336120f518234ff27751b14bac5fbe36fe1476c7" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.764633 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-ft7zj" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.790923 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6fd8c5dc6c-czndt"] Dec 01 09:52:07 crc kubenswrapper[4933]: E1201 09:52:07.791768 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec93f2f8-8c54-443a-9560-4002b2b36a2c" containerName="dnsmasq-dns" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.791791 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec93f2f8-8c54-443a-9560-4002b2b36a2c" containerName="dnsmasq-dns" Dec 01 09:52:07 crc kubenswrapper[4933]: E1201 09:52:07.791806 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec93f2f8-8c54-443a-9560-4002b2b36a2c" containerName="init" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.791813 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec93f2f8-8c54-443a-9560-4002b2b36a2c" containerName="init" Dec 01 09:52:07 crc kubenswrapper[4933]: E1201 09:52:07.791828 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="321a4d39-7ce4-4385-a6f5-5204da92683b" containerName="keystone-bootstrap" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.791841 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="321a4d39-7ce4-4385-a6f5-5204da92683b" containerName="keystone-bootstrap" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.792086 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="321a4d39-7ce4-4385-a6f5-5204da92683b" containerName="keystone-bootstrap" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.792106 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec93f2f8-8c54-443a-9560-4002b2b36a2c" containerName="dnsmasq-dns" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.793101 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6fd8c5dc6c-czndt" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.798590 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.798976 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.798972 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mbpv6" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.799236 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.799545 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.799771 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.843097 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6fd8c5dc6c-czndt"] Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.909393 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-55788c59f6-zd5sp"] Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.922631 4933 scope.go:117] "RemoveContainer" containerID="fbdcf3e4fa43eafc587ff9421099cefc4464f0f681cdcae2cfdcb76a2816a5b9" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.923753 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-ft7zj"] Dec 01 09:52:07 crc kubenswrapper[4933]: W1201 09:52:07.937213 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2caece4_8b42_4e68_9a5d_096ef39b4120.slice/crio-1b94e7068e59ee22c50a2c4a335d1bef103e0149c43d074ed5ac0a2aa0aaf9e4 WatchSource:0}: Error finding container 1b94e7068e59ee22c50a2c4a335d1bef103e0149c43d074ed5ac0a2aa0aaf9e4: Status 404 returned error can't find the container with id 1b94e7068e59ee22c50a2c4a335d1bef103e0149c43d074ed5ac0a2aa0aaf9e4 Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.939127 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-ft7zj"] Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.947658 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea85bb6c-bf92-4f66-8068-8ccc7536bdb4-combined-ca-bundle\") pod \"keystone-6fd8c5dc6c-czndt\" (UID: \"ea85bb6c-bf92-4f66-8068-8ccc7536bdb4\") " pod="openstack/keystone-6fd8c5dc6c-czndt" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.947722 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea85bb6c-bf92-4f66-8068-8ccc7536bdb4-internal-tls-certs\") pod \"keystone-6fd8c5dc6c-czndt\" (UID: \"ea85bb6c-bf92-4f66-8068-8ccc7536bdb4\") " pod="openstack/keystone-6fd8c5dc6c-czndt" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.947750 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea85bb6c-bf92-4f66-8068-8ccc7536bdb4-public-tls-certs\") pod 
\"keystone-6fd8c5dc6c-czndt\" (UID: \"ea85bb6c-bf92-4f66-8068-8ccc7536bdb4\") " pod="openstack/keystone-6fd8c5dc6c-czndt" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.947784 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea85bb6c-bf92-4f66-8068-8ccc7536bdb4-scripts\") pod \"keystone-6fd8c5dc6c-czndt\" (UID: \"ea85bb6c-bf92-4f66-8068-8ccc7536bdb4\") " pod="openstack/keystone-6fd8c5dc6c-czndt" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.947831 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl89f\" (UniqueName: \"kubernetes.io/projected/ea85bb6c-bf92-4f66-8068-8ccc7536bdb4-kube-api-access-pl89f\") pod \"keystone-6fd8c5dc6c-czndt\" (UID: \"ea85bb6c-bf92-4f66-8068-8ccc7536bdb4\") " pod="openstack/keystone-6fd8c5dc6c-czndt" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.947909 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea85bb6c-bf92-4f66-8068-8ccc7536bdb4-fernet-keys\") pod \"keystone-6fd8c5dc6c-czndt\" (UID: \"ea85bb6c-bf92-4f66-8068-8ccc7536bdb4\") " pod="openstack/keystone-6fd8c5dc6c-czndt" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.947945 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea85bb6c-bf92-4f66-8068-8ccc7536bdb4-config-data\") pod \"keystone-6fd8c5dc6c-czndt\" (UID: \"ea85bb6c-bf92-4f66-8068-8ccc7536bdb4\") " pod="openstack/keystone-6fd8c5dc6c-czndt" Dec 01 09:52:07 crc kubenswrapper[4933]: I1201 09:52:07.947968 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ea85bb6c-bf92-4f66-8068-8ccc7536bdb4-credential-keys\") pod \"keystone-6fd8c5dc6c-czndt\" (UID: \"ea85bb6c-bf92-4f66-8068-8ccc7536bdb4\") " pod="openstack/keystone-6fd8c5dc6c-czndt" Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.049786 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea85bb6c-bf92-4f66-8068-8ccc7536bdb4-config-data\") pod \"keystone-6fd8c5dc6c-czndt\" (UID: \"ea85bb6c-bf92-4f66-8068-8ccc7536bdb4\") " pod="openstack/keystone-6fd8c5dc6c-czndt" Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.049863 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ea85bb6c-bf92-4f66-8068-8ccc7536bdb4-credential-keys\") pod \"keystone-6fd8c5dc6c-czndt\" (UID: \"ea85bb6c-bf92-4f66-8068-8ccc7536bdb4\") " pod="openstack/keystone-6fd8c5dc6c-czndt" Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.049917 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea85bb6c-bf92-4f66-8068-8ccc7536bdb4-combined-ca-bundle\") pod \"keystone-6fd8c5dc6c-czndt\" (UID: \"ea85bb6c-bf92-4f66-8068-8ccc7536bdb4\") " pod="openstack/keystone-6fd8c5dc6c-czndt" Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.049971 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea85bb6c-bf92-4f66-8068-8ccc7536bdb4-internal-tls-certs\") pod \"keystone-6fd8c5dc6c-czndt\" 
(UID: \"ea85bb6c-bf92-4f66-8068-8ccc7536bdb4\") " pod="openstack/keystone-6fd8c5dc6c-czndt" Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.050001 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea85bb6c-bf92-4f66-8068-8ccc7536bdb4-public-tls-certs\") pod \"keystone-6fd8c5dc6c-czndt\" (UID: \"ea85bb6c-bf92-4f66-8068-8ccc7536bdb4\") " pod="openstack/keystone-6fd8c5dc6c-czndt" Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.050049 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea85bb6c-bf92-4f66-8068-8ccc7536bdb4-scripts\") pod \"keystone-6fd8c5dc6c-czndt\" (UID: \"ea85bb6c-bf92-4f66-8068-8ccc7536bdb4\") " pod="openstack/keystone-6fd8c5dc6c-czndt" Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.050121 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl89f\" (UniqueName: \"kubernetes.io/projected/ea85bb6c-bf92-4f66-8068-8ccc7536bdb4-kube-api-access-pl89f\") pod \"keystone-6fd8c5dc6c-czndt\" (UID: \"ea85bb6c-bf92-4f66-8068-8ccc7536bdb4\") " pod="openstack/keystone-6fd8c5dc6c-czndt" Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.050207 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea85bb6c-bf92-4f66-8068-8ccc7536bdb4-fernet-keys\") pod \"keystone-6fd8c5dc6c-czndt\" (UID: \"ea85bb6c-bf92-4f66-8068-8ccc7536bdb4\") " pod="openstack/keystone-6fd8c5dc6c-czndt" Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.075541 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ea85bb6c-bf92-4f66-8068-8ccc7536bdb4-credential-keys\") pod \"keystone-6fd8c5dc6c-czndt\" (UID: \"ea85bb6c-bf92-4f66-8068-8ccc7536bdb4\") " pod="openstack/keystone-6fd8c5dc6c-czndt" Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.076500 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea85bb6c-bf92-4f66-8068-8ccc7536bdb4-public-tls-certs\") pod \"keystone-6fd8c5dc6c-czndt\" (UID: \"ea85bb6c-bf92-4f66-8068-8ccc7536bdb4\") " pod="openstack/keystone-6fd8c5dc6c-czndt" Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.076733 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea85bb6c-bf92-4f66-8068-8ccc7536bdb4-scripts\") pod \"keystone-6fd8c5dc6c-czndt\" (UID: \"ea85bb6c-bf92-4f66-8068-8ccc7536bdb4\") " pod="openstack/keystone-6fd8c5dc6c-czndt" Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.077448 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea85bb6c-bf92-4f66-8068-8ccc7536bdb4-combined-ca-bundle\") pod \"keystone-6fd8c5dc6c-czndt\" (UID: \"ea85bb6c-bf92-4f66-8068-8ccc7536bdb4\") " pod="openstack/keystone-6fd8c5dc6c-czndt" Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.077493 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea85bb6c-bf92-4f66-8068-8ccc7536bdb4-config-data\") pod \"keystone-6fd8c5dc6c-czndt\" (UID: \"ea85bb6c-bf92-4f66-8068-8ccc7536bdb4\") " pod="openstack/keystone-6fd8c5dc6c-czndt" Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.078243 4933 
Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.078243 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea85bb6c-bf92-4f66-8068-8ccc7536bdb4-internal-tls-certs\") pod \"keystone-6fd8c5dc6c-czndt\" (UID: \"ea85bb6c-bf92-4f66-8068-8ccc7536bdb4\") " pod="openstack/keystone-6fd8c5dc6c-czndt"
Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.079037 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea85bb6c-bf92-4f66-8068-8ccc7536bdb4-fernet-keys\") pod \"keystone-6fd8c5dc6c-czndt\" (UID: \"ea85bb6c-bf92-4f66-8068-8ccc7536bdb4\") " pod="openstack/keystone-6fd8c5dc6c-czndt"
Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.079952 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl89f\" (UniqueName: \"kubernetes.io/projected/ea85bb6c-bf92-4f66-8068-8ccc7536bdb4-kube-api-access-pl89f\") pod \"keystone-6fd8c5dc6c-czndt\" (UID: \"ea85bb6c-bf92-4f66-8068-8ccc7536bdb4\") " pod="openstack/keystone-6fd8c5dc6c-czndt"
Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.200493 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6fd8c5dc6c-czndt"
Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.791995 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea","Type":"ContainerStarted","Data":"c98ca899829524f929e6a637495ae92006b1826cc73e11429a9284832e622a2d"}
Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.803979 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n76m8" event={"ID":"4d3e5dc3-470a-4fa0-b17c-733457329c79","Type":"ContainerStarted","Data":"3d4c2bbba9101c46a70cff7bd598fcdd9e716f2433c63dc7330090d3f2eaa09c"}
Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.824793 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55788c59f6-zd5sp" event={"ID":"c2caece4-8b42-4e68-9a5d-096ef39b4120","Type":"ContainerStarted","Data":"18dc1647d1ed8abc53e0936044c578c72afa7579cdf4bb65eb48d35093f78096"}
Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.824876 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55788c59f6-zd5sp" event={"ID":"c2caece4-8b42-4e68-9a5d-096ef39b4120","Type":"ContainerStarted","Data":"1b94e7068e59ee22c50a2c4a335d1bef103e0149c43d074ed5ac0a2aa0aaf9e4"}
Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.846594 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dd758bcf-r4prx" event={"ID":"39d17922-6634-497e-9dab-330fcbde16fe","Type":"ContainerStarted","Data":"1fa4cb96b1de09c0bc60bf854734f3de9224d92ff2f9fac465c4059aacd431ec"}
Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.848032 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5dd758bcf-r4prx"
Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.851207 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6fd8c5dc6c-czndt"]
Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.861082 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qw69v" event={"ID":"82f89d96-ceb5-4012-9273-68d00cb0780b","Type":"ContainerStarted","Data":"34b66823c89e2a4ba58921b9a111e53e35fdc1a2633c0cad1dca542f199791a0"}
Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.873127 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-n76m8" podStartSLOduration=4.323391603 podStartE2EDuration="52.873050722s" podCreationTimestamp="2025-12-01 09:51:16 +0000 UTC" firstStartedPulling="2025-12-01 09:51:18.745830561 +0000 UTC m=+1169.387554166" lastFinishedPulling="2025-12-01 09:52:07.29548967 +0000 UTC m=+1217.937213285" observedRunningTime="2025-12-01 09:52:08.852765106 +0000 UTC m=+1219.494488721" watchObservedRunningTime="2025-12-01 09:52:08.873050722 +0000 UTC m=+1219.514774327"
startup duration" pod="openstack/cinder-db-sync-n76m8" podStartSLOduration=4.323391603 podStartE2EDuration="52.873050722s" podCreationTimestamp="2025-12-01 09:51:16 +0000 UTC" firstStartedPulling="2025-12-01 09:51:18.745830561 +0000 UTC m=+1169.387554166" lastFinishedPulling="2025-12-01 09:52:07.29548967 +0000 UTC m=+1217.937213285" observedRunningTime="2025-12-01 09:52:08.852765106 +0000 UTC m=+1219.494488721" watchObservedRunningTime="2025-12-01 09:52:08.873050722 +0000 UTC m=+1219.514774327" Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.928109 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5dd758bcf-r4prx" podStartSLOduration=15.928080281 podStartE2EDuration="15.928080281s" podCreationTimestamp="2025-12-01 09:51:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:52:08.902288958 +0000 UTC m=+1219.544012573" watchObservedRunningTime="2025-12-01 09:52:08.928080281 +0000 UTC m=+1219.569803896" Dec 01 09:52:08 crc kubenswrapper[4933]: I1201 09:52:08.975728 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-qw69v" podStartSLOduration=4.506718464 podStartE2EDuration="52.975700107s" podCreationTimestamp="2025-12-01 09:51:16 +0000 UTC" firstStartedPulling="2025-12-01 09:51:18.78253764 +0000 UTC m=+1169.424261255" lastFinishedPulling="2025-12-01 09:52:07.251519283 +0000 UTC m=+1217.893242898" observedRunningTime="2025-12-01 09:52:08.942577825 +0000 UTC m=+1219.584301450" watchObservedRunningTime="2025-12-01 09:52:08.975700107 +0000 UTC m=+1219.617423722" Dec 01 09:52:09 crc kubenswrapper[4933]: I1201 09:52:09.064550 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6775f97bdb-vs7m8" podUID="6ffcfb41-8086-4e28-b88a-da47dd38a844" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Dec 01 09:52:09 crc kubenswrapper[4933]: I1201 09:52:09.139575 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-75479c6864-2fvz5" podUID="000656f6-99fd-43a3-8ade-31b200d0c18a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Dec 01 09:52:09 crc kubenswrapper[4933]: I1201 09:52:09.683565 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec93f2f8-8c54-443a-9560-4002b2b36a2c" path="/var/lib/kubelet/pods/ec93f2f8-8c54-443a-9560-4002b2b36a2c/volumes" Dec 01 09:52:09 crc kubenswrapper[4933]: I1201 09:52:09.877332 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55788c59f6-zd5sp" event={"ID":"c2caece4-8b42-4e68-9a5d-096ef39b4120","Type":"ContainerStarted","Data":"0261749d9ebbbd64514e76400583bb79645a4fd104911e8e608f42f276b8ceaf"} Dec 01 09:52:09 crc kubenswrapper[4933]: I1201 09:52:09.877666 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:09 crc kubenswrapper[4933]: I1201 09:52:09.879395 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6fd8c5dc6c-czndt" event={"ID":"ea85bb6c-bf92-4f66-8068-8ccc7536bdb4","Type":"ContainerStarted","Data":"2b2aa9bfcd67238b5d26239970c15d969ce6759e24872c7ddd8c0ce6dbc21340"} Dec 01 09:52:09 crc 
kubenswrapper[4933]: I1201 09:52:09.879478 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6fd8c5dc6c-czndt" event={"ID":"ea85bb6c-bf92-4f66-8068-8ccc7536bdb4","Type":"ContainerStarted","Data":"91d2f9b2abc75477a087cc1c6ebb0d0699cfe57d0a2ae78f4b24d69abec1e6ec"} Dec 01 09:52:09 crc kubenswrapper[4933]: I1201 09:52:09.925870 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-55788c59f6-zd5sp" podStartSLOduration=10.925836384 podStartE2EDuration="10.925836384s" podCreationTimestamp="2025-12-01 09:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:52:09.925193648 +0000 UTC m=+1220.566917283" watchObservedRunningTime="2025-12-01 09:52:09.925836384 +0000 UTC m=+1220.567559999" Dec 01 09:52:09 crc kubenswrapper[4933]: I1201 09:52:09.979814 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6fd8c5dc6c-czndt" podStartSLOduration=2.979781285 podStartE2EDuration="2.979781285s" podCreationTimestamp="2025-12-01 09:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:52:09.975154872 +0000 UTC m=+1220.616878497" watchObservedRunningTime="2025-12-01 09:52:09.979781285 +0000 UTC m=+1220.621504900" Dec 01 09:52:10 crc kubenswrapper[4933]: I1201 09:52:10.897629 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6fd8c5dc6c-czndt" Dec 01 09:52:10 crc kubenswrapper[4933]: I1201 09:52:10.898037 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:11 crc kubenswrapper[4933]: I1201 09:52:11.740874 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:52:11 crc kubenswrapper[4933]: I1201 09:52:11.741363 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:52:12 crc kubenswrapper[4933]: I1201 09:52:12.938012 4933 generic.go:334] "Generic (PLEG): container finished" podID="82f89d96-ceb5-4012-9273-68d00cb0780b" containerID="34b66823c89e2a4ba58921b9a111e53e35fdc1a2633c0cad1dca542f199791a0" exitCode=0 Dec 01 09:52:12 crc kubenswrapper[4933]: I1201 09:52:12.938088 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qw69v" event={"ID":"82f89d96-ceb5-4012-9273-68d00cb0780b","Type":"ContainerDied","Data":"34b66823c89e2a4ba58921b9a111e53e35fdc1a2633c0cad1dca542f199791a0"} Dec 01 09:52:19 crc kubenswrapper[4933]: I1201 09:52:19.060553 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6775f97bdb-vs7m8" podUID="6ffcfb41-8086-4e28-b88a-da47dd38a844" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Dec 01 09:52:19 crc kubenswrapper[4933]: I1201 09:52:19.137181 4933 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-75479c6864-2fvz5" podUID="000656f6-99fd-43a3-8ade-31b200d0c18a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Dec 01 09:52:20 crc kubenswrapper[4933]: I1201 09:52:20.375665 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qw69v" Dec 01 09:52:20 crc kubenswrapper[4933]: I1201 09:52:20.492276 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss9td\" (UniqueName: \"kubernetes.io/projected/82f89d96-ceb5-4012-9273-68d00cb0780b-kube-api-access-ss9td\") pod \"82f89d96-ceb5-4012-9273-68d00cb0780b\" (UID: \"82f89d96-ceb5-4012-9273-68d00cb0780b\") " Dec 01 09:52:20 crc kubenswrapper[4933]: I1201 09:52:20.492843 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82f89d96-ceb5-4012-9273-68d00cb0780b-combined-ca-bundle\") pod \"82f89d96-ceb5-4012-9273-68d00cb0780b\" (UID: \"82f89d96-ceb5-4012-9273-68d00cb0780b\") " Dec 01 09:52:20 crc kubenswrapper[4933]: I1201 09:52:20.493057 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82f89d96-ceb5-4012-9273-68d00cb0780b-db-sync-config-data\") pod \"82f89d96-ceb5-4012-9273-68d00cb0780b\" (UID: \"82f89d96-ceb5-4012-9273-68d00cb0780b\") " Dec 01 09:52:20 crc kubenswrapper[4933]: I1201 09:52:20.498109 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82f89d96-ceb5-4012-9273-68d00cb0780b-kube-api-access-ss9td" (OuterVolumeSpecName: "kube-api-access-ss9td") pod "82f89d96-ceb5-4012-9273-68d00cb0780b" (UID: "82f89d96-ceb5-4012-9273-68d00cb0780b"). InnerVolumeSpecName "kube-api-access-ss9td". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:52:20 crc kubenswrapper[4933]: I1201 09:52:20.498602 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82f89d96-ceb5-4012-9273-68d00cb0780b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "82f89d96-ceb5-4012-9273-68d00cb0780b" (UID: "82f89d96-ceb5-4012-9273-68d00cb0780b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:52:20 crc kubenswrapper[4933]: I1201 09:52:20.520966 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82f89d96-ceb5-4012-9273-68d00cb0780b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82f89d96-ceb5-4012-9273-68d00cb0780b" (UID: "82f89d96-ceb5-4012-9273-68d00cb0780b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:52:20 crc kubenswrapper[4933]: I1201 09:52:20.595547 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss9td\" (UniqueName: \"kubernetes.io/projected/82f89d96-ceb5-4012-9273-68d00cb0780b-kube-api-access-ss9td\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:20 crc kubenswrapper[4933]: I1201 09:52:20.595588 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82f89d96-ceb5-4012-9273-68d00cb0780b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:20 crc kubenswrapper[4933]: I1201 09:52:20.595600 4933 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82f89d96-ceb5-4012-9273-68d00cb0780b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:20 crc kubenswrapper[4933]: E1201 09:52:20.596022 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea" Dec 01 09:52:20 crc kubenswrapper[4933]: I1201 09:52:20.705263 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-66cbb488fb-wvfw6" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.032648 4933 generic.go:334] "Generic (PLEG): container finished" podID="4d3e5dc3-470a-4fa0-b17c-733457329c79" containerID="3d4c2bbba9101c46a70cff7bd598fcdd9e716f2433c63dc7330090d3f2eaa09c" exitCode=0 Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.032738 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n76m8" event={"ID":"4d3e5dc3-470a-4fa0-b17c-733457329c79","Type":"ContainerDied","Data":"3d4c2bbba9101c46a70cff7bd598fcdd9e716f2433c63dc7330090d3f2eaa09c"} Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.036472 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-qw69v" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.036512 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qw69v" event={"ID":"82f89d96-ceb5-4012-9273-68d00cb0780b","Type":"ContainerDied","Data":"fabc982509c61e3344eef97e45289e067122877c103cfdd7da8a511c97fdd3b7"} Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.036549 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fabc982509c61e3344eef97e45289e067122877c103cfdd7da8a511c97fdd3b7" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.040957 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea","Type":"ContainerStarted","Data":"98c0b8143d6bc0f65ca3b7bb8c7fe333ca92f9d8d60ee165d9685b4ab34378b6"} Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.041156 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea" containerName="ceilometer-notification-agent" containerID="cri-o://439c574e9494f6a21f125d27c449e302b3eb1f97678a19d6e808352986326108" gracePeriod=30 Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.041497 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.041545 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea" containerName="proxy-httpd" containerID="cri-o://98c0b8143d6bc0f65ca3b7bb8c7fe333ca92f9d8d60ee165d9685b4ab34378b6" gracePeriod=30 Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.041589 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea" containerName="sg-core" containerID="cri-o://c98ca899829524f929e6a637495ae92006b1826cc73e11429a9284832e622a2d" gracePeriod=30 Dec 01 09:52:21 crc kubenswrapper[4933]: E1201 09:52:21.078004 4933 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82f89d96_ceb5_4012_9273_68d00cb0780b.slice\": RecentStats: unable to find data in memory cache]" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.686359 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6dd45957c5-5f9ff"] Dec 01 09:52:21 crc kubenswrapper[4933]: E1201 09:52:21.687208 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82f89d96-ceb5-4012-9273-68d00cb0780b" containerName="barbican-db-sync" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.687223 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="82f89d96-ceb5-4012-9273-68d00cb0780b" containerName="barbican-db-sync" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.687423 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="82f89d96-ceb5-4012-9273-68d00cb0780b" containerName="barbican-db-sync" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.688544 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6dd45957c5-5f9ff" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.692357 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.692678 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-s2wkp" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.692868 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.709071 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6dd45957c5-5f9ff"] Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.782006 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5b85f87c74-hvnkk"] Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.784405 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5b85f87c74-hvnkk" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.791057 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.806321 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5b85f87c74-hvnkk"] Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.825448 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19520328-8d8b-4f49-8c93-82cdfb3623c4-config-data-custom\") pod \"barbican-worker-6dd45957c5-5f9ff\" (UID: \"19520328-8d8b-4f49-8c93-82cdfb3623c4\") " pod="openstack/barbican-worker-6dd45957c5-5f9ff" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.825523 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19520328-8d8b-4f49-8c93-82cdfb3623c4-config-data\") pod \"barbican-worker-6dd45957c5-5f9ff\" (UID: \"19520328-8d8b-4f49-8c93-82cdfb3623c4\") " pod="openstack/barbican-worker-6dd45957c5-5f9ff" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.825633 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19520328-8d8b-4f49-8c93-82cdfb3623c4-logs\") pod \"barbican-worker-6dd45957c5-5f9ff\" (UID: \"19520328-8d8b-4f49-8c93-82cdfb3623c4\") " pod="openstack/barbican-worker-6dd45957c5-5f9ff" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.825680 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z87xv\" (UniqueName: \"kubernetes.io/projected/19520328-8d8b-4f49-8c93-82cdfb3623c4-kube-api-access-z87xv\") pod \"barbican-worker-6dd45957c5-5f9ff\" (UID: \"19520328-8d8b-4f49-8c93-82cdfb3623c4\") " pod="openstack/barbican-worker-6dd45957c5-5f9ff" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.825738 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19520328-8d8b-4f49-8c93-82cdfb3623c4-combined-ca-bundle\") pod \"barbican-worker-6dd45957c5-5f9ff\" (UID: \"19520328-8d8b-4f49-8c93-82cdfb3623c4\") " 
pod="openstack/barbican-worker-6dd45957c5-5f9ff" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.896099 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-j7hpq"] Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.898603 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-j7hpq" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.912715 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-j7hpq"] Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.927476 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/036e08a4-0b6f-498f-a851-723b07c2f687-logs\") pod \"barbican-keystone-listener-5b85f87c74-hvnkk\" (UID: \"036e08a4-0b6f-498f-a851-723b07c2f687\") " pod="openstack/barbican-keystone-listener-5b85f87c74-hvnkk" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.927567 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z87xv\" (UniqueName: \"kubernetes.io/projected/19520328-8d8b-4f49-8c93-82cdfb3623c4-kube-api-access-z87xv\") pod \"barbican-worker-6dd45957c5-5f9ff\" (UID: \"19520328-8d8b-4f49-8c93-82cdfb3623c4\") " pod="openstack/barbican-worker-6dd45957c5-5f9ff" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.927617 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19520328-8d8b-4f49-8c93-82cdfb3623c4-combined-ca-bundle\") pod \"barbican-worker-6dd45957c5-5f9ff\" (UID: \"19520328-8d8b-4f49-8c93-82cdfb3623c4\") " pod="openstack/barbican-worker-6dd45957c5-5f9ff" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.927683 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/036e08a4-0b6f-498f-a851-723b07c2f687-config-data-custom\") pod \"barbican-keystone-listener-5b85f87c74-hvnkk\" (UID: \"036e08a4-0b6f-498f-a851-723b07c2f687\") " pod="openstack/barbican-keystone-listener-5b85f87c74-hvnkk" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.927714 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19520328-8d8b-4f49-8c93-82cdfb3623c4-config-data-custom\") pod \"barbican-worker-6dd45957c5-5f9ff\" (UID: \"19520328-8d8b-4f49-8c93-82cdfb3623c4\") " pod="openstack/barbican-worker-6dd45957c5-5f9ff" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.927740 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsf57\" (UniqueName: \"kubernetes.io/projected/036e08a4-0b6f-498f-a851-723b07c2f687-kube-api-access-rsf57\") pod \"barbican-keystone-listener-5b85f87c74-hvnkk\" (UID: \"036e08a4-0b6f-498f-a851-723b07c2f687\") " pod="openstack/barbican-keystone-listener-5b85f87c74-hvnkk" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.927922 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/036e08a4-0b6f-498f-a851-723b07c2f687-combined-ca-bundle\") pod \"barbican-keystone-listener-5b85f87c74-hvnkk\" (UID: \"036e08a4-0b6f-498f-a851-723b07c2f687\") " pod="openstack/barbican-keystone-listener-5b85f87c74-hvnkk" Dec 01 09:52:21 crc 
kubenswrapper[4933]: I1201 09:52:21.927968 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19520328-8d8b-4f49-8c93-82cdfb3623c4-config-data\") pod \"barbican-worker-6dd45957c5-5f9ff\" (UID: \"19520328-8d8b-4f49-8c93-82cdfb3623c4\") " pod="openstack/barbican-worker-6dd45957c5-5f9ff" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.928088 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/036e08a4-0b6f-498f-a851-723b07c2f687-config-data\") pod \"barbican-keystone-listener-5b85f87c74-hvnkk\" (UID: \"036e08a4-0b6f-498f-a851-723b07c2f687\") " pod="openstack/barbican-keystone-listener-5b85f87c74-hvnkk" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.928247 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19520328-8d8b-4f49-8c93-82cdfb3623c4-logs\") pod \"barbican-worker-6dd45957c5-5f9ff\" (UID: \"19520328-8d8b-4f49-8c93-82cdfb3623c4\") " pod="openstack/barbican-worker-6dd45957c5-5f9ff" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.931036 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19520328-8d8b-4f49-8c93-82cdfb3623c4-logs\") pod \"barbican-worker-6dd45957c5-5f9ff\" (UID: \"19520328-8d8b-4f49-8c93-82cdfb3623c4\") " pod="openstack/barbican-worker-6dd45957c5-5f9ff" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.937741 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19520328-8d8b-4f49-8c93-82cdfb3623c4-combined-ca-bundle\") pod \"barbican-worker-6dd45957c5-5f9ff\" (UID: \"19520328-8d8b-4f49-8c93-82cdfb3623c4\") " pod="openstack/barbican-worker-6dd45957c5-5f9ff" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.954649 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19520328-8d8b-4f49-8c93-82cdfb3623c4-config-data-custom\") pod \"barbican-worker-6dd45957c5-5f9ff\" (UID: \"19520328-8d8b-4f49-8c93-82cdfb3623c4\") " pod="openstack/barbican-worker-6dd45957c5-5f9ff" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.957349 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19520328-8d8b-4f49-8c93-82cdfb3623c4-config-data\") pod \"barbican-worker-6dd45957c5-5f9ff\" (UID: \"19520328-8d8b-4f49-8c93-82cdfb3623c4\") " pod="openstack/barbican-worker-6dd45957c5-5f9ff" Dec 01 09:52:21 crc kubenswrapper[4933]: I1201 09:52:21.966058 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z87xv\" (UniqueName: \"kubernetes.io/projected/19520328-8d8b-4f49-8c93-82cdfb3623c4-kube-api-access-z87xv\") pod \"barbican-worker-6dd45957c5-5f9ff\" (UID: \"19520328-8d8b-4f49-8c93-82cdfb3623c4\") " pod="openstack/barbican-worker-6dd45957c5-5f9ff" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.015162 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6dd45957c5-5f9ff" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.035993 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-dns-svc\") pod \"dnsmasq-dns-85ff748b95-j7hpq\" (UID: \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\") " pod="openstack/dnsmasq-dns-85ff748b95-j7hpq" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.036142 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q5ts\" (UniqueName: \"kubernetes.io/projected/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-kube-api-access-7q5ts\") pod \"dnsmasq-dns-85ff748b95-j7hpq\" (UID: \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\") " pod="openstack/dnsmasq-dns-85ff748b95-j7hpq" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.036181 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/036e08a4-0b6f-498f-a851-723b07c2f687-config-data-custom\") pod \"barbican-keystone-listener-5b85f87c74-hvnkk\" (UID: \"036e08a4-0b6f-498f-a851-723b07c2f687\") " pod="openstack/barbican-keystone-listener-5b85f87c74-hvnkk" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.036202 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-config\") pod \"dnsmasq-dns-85ff748b95-j7hpq\" (UID: \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\") " pod="openstack/dnsmasq-dns-85ff748b95-j7hpq" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.036248 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-j7hpq\" (UID: \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\") " pod="openstack/dnsmasq-dns-85ff748b95-j7hpq" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.036268 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsf57\" (UniqueName: \"kubernetes.io/projected/036e08a4-0b6f-498f-a851-723b07c2f687-kube-api-access-rsf57\") pod \"barbican-keystone-listener-5b85f87c74-hvnkk\" (UID: \"036e08a4-0b6f-498f-a851-723b07c2f687\") " pod="openstack/barbican-keystone-listener-5b85f87c74-hvnkk" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.036340 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/036e08a4-0b6f-498f-a851-723b07c2f687-combined-ca-bundle\") pod \"barbican-keystone-listener-5b85f87c74-hvnkk\" (UID: \"036e08a4-0b6f-498f-a851-723b07c2f687\") " pod="openstack/barbican-keystone-listener-5b85f87c74-hvnkk" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.036445 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/036e08a4-0b6f-498f-a851-723b07c2f687-config-data\") pod \"barbican-keystone-listener-5b85f87c74-hvnkk\" (UID: \"036e08a4-0b6f-498f-a851-723b07c2f687\") " pod="openstack/barbican-keystone-listener-5b85f87c74-hvnkk" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.036501 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-j7hpq\" (UID: \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\") " pod="openstack/dnsmasq-dns-85ff748b95-j7hpq" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.036678 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-j7hpq\" (UID: \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\") " pod="openstack/dnsmasq-dns-85ff748b95-j7hpq" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.036797 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/036e08a4-0b6f-498f-a851-723b07c2f687-logs\") pod \"barbican-keystone-listener-5b85f87c74-hvnkk\" (UID: \"036e08a4-0b6f-498f-a851-723b07c2f687\") " pod="openstack/barbican-keystone-listener-5b85f87c74-hvnkk" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.037594 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/036e08a4-0b6f-498f-a851-723b07c2f687-logs\") pod \"barbican-keystone-listener-5b85f87c74-hvnkk\" (UID: \"036e08a4-0b6f-498f-a851-723b07c2f687\") " pod="openstack/barbican-keystone-listener-5b85f87c74-hvnkk" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.046082 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/036e08a4-0b6f-498f-a851-723b07c2f687-combined-ca-bundle\") pod \"barbican-keystone-listener-5b85f87c74-hvnkk\" (UID: \"036e08a4-0b6f-498f-a851-723b07c2f687\") " pod="openstack/barbican-keystone-listener-5b85f87c74-hvnkk" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.049453 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/036e08a4-0b6f-498f-a851-723b07c2f687-config-data-custom\") pod \"barbican-keystone-listener-5b85f87c74-hvnkk\" (UID: \"036e08a4-0b6f-498f-a851-723b07c2f687\") " pod="openstack/barbican-keystone-listener-5b85f87c74-hvnkk" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.055108 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/036e08a4-0b6f-498f-a851-723b07c2f687-config-data\") pod \"barbican-keystone-listener-5b85f87c74-hvnkk\" (UID: \"036e08a4-0b6f-498f-a851-723b07c2f687\") " pod="openstack/barbican-keystone-listener-5b85f87c74-hvnkk" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.076858 4933 generic.go:334] "Generic (PLEG): container finished" podID="45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea" containerID="98c0b8143d6bc0f65ca3b7bb8c7fe333ca92f9d8d60ee165d9685b4ab34378b6" exitCode=0 Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.076903 4933 generic.go:334] "Generic (PLEG): container finished" podID="45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea" containerID="c98ca899829524f929e6a637495ae92006b1826cc73e11429a9284832e622a2d" exitCode=2 Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.076916 4933 generic.go:334] "Generic (PLEG): container finished" podID="45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea" containerID="439c574e9494f6a21f125d27c449e302b3eb1f97678a19d6e808352986326108" exitCode=0 Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.077129 4933 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea","Type":"ContainerDied","Data":"98c0b8143d6bc0f65ca3b7bb8c7fe333ca92f9d8d60ee165d9685b4ab34378b6"} Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.077169 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea","Type":"ContainerDied","Data":"c98ca899829524f929e6a637495ae92006b1826cc73e11429a9284832e622a2d"} Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.077186 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea","Type":"ContainerDied","Data":"439c574e9494f6a21f125d27c449e302b3eb1f97678a19d6e808352986326108"} Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.130786 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-587d6fd9d4-mtz4n"] Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.132776 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-587d6fd9d4-mtz4n" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.139987 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-j7hpq\" (UID: \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\") " pod="openstack/dnsmasq-dns-85ff748b95-j7hpq" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.140098 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-j7hpq\" (UID: \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\") " pod="openstack/dnsmasq-dns-85ff748b95-j7hpq" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.140230 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-dns-svc\") pod \"dnsmasq-dns-85ff748b95-j7hpq\" (UID: \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\") " pod="openstack/dnsmasq-dns-85ff748b95-j7hpq" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.141867 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-j7hpq\" (UID: \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\") " pod="openstack/dnsmasq-dns-85ff748b95-j7hpq" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.141879 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-j7hpq\" (UID: \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\") " pod="openstack/dnsmasq-dns-85ff748b95-j7hpq" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.142819 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-dns-svc\") pod \"dnsmasq-dns-85ff748b95-j7hpq\" (UID: \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\") " pod="openstack/dnsmasq-dns-85ff748b95-j7hpq" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.140300 4933 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7q5ts\" (UniqueName: \"kubernetes.io/projected/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-kube-api-access-7q5ts\") pod \"dnsmasq-dns-85ff748b95-j7hpq\" (UID: \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\") " pod="openstack/dnsmasq-dns-85ff748b95-j7hpq" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.159565 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-config\") pod \"dnsmasq-dns-85ff748b95-j7hpq\" (UID: \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\") " pod="openstack/dnsmasq-dns-85ff748b95-j7hpq" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.159688 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-j7hpq\" (UID: \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\") " pod="openstack/dnsmasq-dns-85ff748b95-j7hpq" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.161175 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-j7hpq\" (UID: \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\") " pod="openstack/dnsmasq-dns-85ff748b95-j7hpq" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.161646 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-config\") pod \"dnsmasq-dns-85ff748b95-j7hpq\" (UID: \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\") " pod="openstack/dnsmasq-dns-85ff748b95-j7hpq" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.162073 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.171140 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsf57\" (UniqueName: \"kubernetes.io/projected/036e08a4-0b6f-498f-a851-723b07c2f687-kube-api-access-rsf57\") pod \"barbican-keystone-listener-5b85f87c74-hvnkk\" (UID: \"036e08a4-0b6f-498f-a851-723b07c2f687\") " pod="openstack/barbican-keystone-listener-5b85f87c74-hvnkk" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.245735 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-587d6fd9d4-mtz4n"] Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.254431 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q5ts\" (UniqueName: \"kubernetes.io/projected/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-kube-api-access-7q5ts\") pod \"dnsmasq-dns-85ff748b95-j7hpq\" (UID: \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\") " pod="openstack/dnsmasq-dns-85ff748b95-j7hpq" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.303126 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-config-data-custom\") pod \"barbican-api-587d6fd9d4-mtz4n\" (UID: \"db6a88f7-d4dd-4a87-88f5-499bb9f5d377\") " pod="openstack/barbican-api-587d6fd9d4-mtz4n" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.303209 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-combined-ca-bundle\") pod \"barbican-api-587d6fd9d4-mtz4n\" (UID: \"db6a88f7-d4dd-4a87-88f5-499bb9f5d377\") " pod="openstack/barbican-api-587d6fd9d4-mtz4n" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.303468 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-config-data\") pod \"barbican-api-587d6fd9d4-mtz4n\" (UID: \"db6a88f7-d4dd-4a87-88f5-499bb9f5d377\") " pod="openstack/barbican-api-587d6fd9d4-mtz4n" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.303565 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckhql\" (UniqueName: \"kubernetes.io/projected/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-kube-api-access-ckhql\") pod \"barbican-api-587d6fd9d4-mtz4n\" (UID: \"db6a88f7-d4dd-4a87-88f5-499bb9f5d377\") " pod="openstack/barbican-api-587d6fd9d4-mtz4n" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.303688 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-logs\") pod \"barbican-api-587d6fd9d4-mtz4n\" (UID: \"db6a88f7-d4dd-4a87-88f5-499bb9f5d377\") " pod="openstack/barbican-api-587d6fd9d4-mtz4n" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.406927 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-j7hpq" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.428046 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-config-data-custom\") pod \"barbican-api-587d6fd9d4-mtz4n\" (UID: \"db6a88f7-d4dd-4a87-88f5-499bb9f5d377\") " pod="openstack/barbican-api-587d6fd9d4-mtz4n" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.428119 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-combined-ca-bundle\") pod \"barbican-api-587d6fd9d4-mtz4n\" (UID: \"db6a88f7-d4dd-4a87-88f5-499bb9f5d377\") " pod="openstack/barbican-api-587d6fd9d4-mtz4n" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.428228 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-config-data\") pod \"barbican-api-587d6fd9d4-mtz4n\" (UID: \"db6a88f7-d4dd-4a87-88f5-499bb9f5d377\") " pod="openstack/barbican-api-587d6fd9d4-mtz4n" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.428264 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckhql\" (UniqueName: \"kubernetes.io/projected/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-kube-api-access-ckhql\") pod \"barbican-api-587d6fd9d4-mtz4n\" (UID: \"db6a88f7-d4dd-4a87-88f5-499bb9f5d377\") " pod="openstack/barbican-api-587d6fd9d4-mtz4n" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.428412 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-logs\") pod \"barbican-api-587d6fd9d4-mtz4n\" (UID: \"db6a88f7-d4dd-4a87-88f5-499bb9f5d377\") " 
pod="openstack/barbican-api-587d6fd9d4-mtz4n" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.429043 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-logs\") pod \"barbican-api-587d6fd9d4-mtz4n\" (UID: \"db6a88f7-d4dd-4a87-88f5-499bb9f5d377\") " pod="openstack/barbican-api-587d6fd9d4-mtz4n" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.478625 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5b85f87c74-hvnkk" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.482586 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-combined-ca-bundle\") pod \"barbican-api-587d6fd9d4-mtz4n\" (UID: \"db6a88f7-d4dd-4a87-88f5-499bb9f5d377\") " pod="openstack/barbican-api-587d6fd9d4-mtz4n" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.486741 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-config-data-custom\") pod \"barbican-api-587d6fd9d4-mtz4n\" (UID: \"db6a88f7-d4dd-4a87-88f5-499bb9f5d377\") " pod="openstack/barbican-api-587d6fd9d4-mtz4n" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.500862 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-config-data\") pod \"barbican-api-587d6fd9d4-mtz4n\" (UID: \"db6a88f7-d4dd-4a87-88f5-499bb9f5d377\") " pod="openstack/barbican-api-587d6fd9d4-mtz4n" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.569147 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckhql\" (UniqueName: \"kubernetes.io/projected/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-kube-api-access-ckhql\") pod \"barbican-api-587d6fd9d4-mtz4n\" (UID: \"db6a88f7-d4dd-4a87-88f5-499bb9f5d377\") " pod="openstack/barbican-api-587d6fd9d4-mtz4n" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.677888 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.775951 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-run-httpd\") pod \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.776414 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-scripts\") pod \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.776489 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s6wm\" (UniqueName: \"kubernetes.io/projected/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-kube-api-access-7s6wm\") pod \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.776658 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-combined-ca-bundle\") pod \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.776789 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-log-httpd\") pod \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.777023 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-sg-core-conf-yaml\") pod \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.777118 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-config-data\") pod \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\" (UID: \"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea\") " Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.777284 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea" (UID: "45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.778072 4933 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.780740 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea" (UID: "45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.788967 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-scripts" (OuterVolumeSpecName: "scripts") pod "45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea" (UID: "45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.789433 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-kube-api-access-7s6wm" (OuterVolumeSpecName: "kube-api-access-7s6wm") pod "45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea" (UID: "45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea"). InnerVolumeSpecName "kube-api-access-7s6wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.827603 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea" (UID: "45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.879372 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-587d6fd9d4-mtz4n" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.881811 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s6wm\" (UniqueName: \"kubernetes.io/projected/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-kube-api-access-7s6wm\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.883852 4933 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.883868 4933 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.883879 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.930495 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea" (UID: "45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.979045 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-config-data" (OuterVolumeSpecName: "config-data") pod "45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea" (UID: "45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.986187 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-n76m8" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.988112 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:22 crc kubenswrapper[4933]: I1201 09:52:22.988131 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.089344 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d3e5dc3-470a-4fa0-b17c-733457329c79-etc-machine-id\") pod \"4d3e5dc3-470a-4fa0-b17c-733457329c79\" (UID: \"4d3e5dc3-470a-4fa0-b17c-733457329c79\") " Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.089425 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d3e5dc3-470a-4fa0-b17c-733457329c79-combined-ca-bundle\") pod \"4d3e5dc3-470a-4fa0-b17c-733457329c79\" (UID: \"4d3e5dc3-470a-4fa0-b17c-733457329c79\") " Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.089472 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q29tl\" (UniqueName: \"kubernetes.io/projected/4d3e5dc3-470a-4fa0-b17c-733457329c79-kube-api-access-q29tl\") pod \"4d3e5dc3-470a-4fa0-b17c-733457329c79\" (UID: \"4d3e5dc3-470a-4fa0-b17c-733457329c79\") " Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.089704 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d3e5dc3-470a-4fa0-b17c-733457329c79-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4d3e5dc3-470a-4fa0-b17c-733457329c79" (UID: "4d3e5dc3-470a-4fa0-b17c-733457329c79"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.089822 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4d3e5dc3-470a-4fa0-b17c-733457329c79-db-sync-config-data\") pod \"4d3e5dc3-470a-4fa0-b17c-733457329c79\" (UID: \"4d3e5dc3-470a-4fa0-b17c-733457329c79\") " Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.089908 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d3e5dc3-470a-4fa0-b17c-733457329c79-scripts\") pod \"4d3e5dc3-470a-4fa0-b17c-733457329c79\" (UID: \"4d3e5dc3-470a-4fa0-b17c-733457329c79\") " Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.089976 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d3e5dc3-470a-4fa0-b17c-733457329c79-config-data\") pod \"4d3e5dc3-470a-4fa0-b17c-733457329c79\" (UID: \"4d3e5dc3-470a-4fa0-b17c-733457329c79\") " Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.091088 4933 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d3e5dc3-470a-4fa0-b17c-733457329c79-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.098889 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d3e5dc3-470a-4fa0-b17c-733457329c79-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4d3e5dc3-470a-4fa0-b17c-733457329c79" (UID: "4d3e5dc3-470a-4fa0-b17c-733457329c79"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.102038 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d3e5dc3-470a-4fa0-b17c-733457329c79-scripts" (OuterVolumeSpecName: "scripts") pod "4d3e5dc3-470a-4fa0-b17c-733457329c79" (UID: "4d3e5dc3-470a-4fa0-b17c-733457329c79"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.104775 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d3e5dc3-470a-4fa0-b17c-733457329c79-kube-api-access-q29tl" (OuterVolumeSpecName: "kube-api-access-q29tl") pod "4d3e5dc3-470a-4fa0-b17c-733457329c79" (UID: "4d3e5dc3-470a-4fa0-b17c-733457329c79"). InnerVolumeSpecName "kube-api-access-q29tl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.111144 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-n76m8" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.111885 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n76m8" event={"ID":"4d3e5dc3-470a-4fa0-b17c-733457329c79","Type":"ContainerDied","Data":"701f41f037a39f412ce2742001aaa5c32cf9ab2bb1782f076f09928484ef7efc"} Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.112009 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="701f41f037a39f412ce2742001aaa5c32cf9ab2bb1782f076f09928484ef7efc" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.125122 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea","Type":"ContainerDied","Data":"784e1df92a8f0191e3ee36e44709e30a2432852f1e7580e576cbfe7cb94d3d37"} Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.125199 4933 scope.go:117] "RemoveContainer" containerID="98c0b8143d6bc0f65ca3b7bb8c7fe333ca92f9d8d60ee165d9685b4ab34378b6" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.125247 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.182710 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d3e5dc3-470a-4fa0-b17c-733457329c79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d3e5dc3-470a-4fa0-b17c-733457329c79" (UID: "4d3e5dc3-470a-4fa0-b17c-733457329c79"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.195778 4933 scope.go:117] "RemoveContainer" containerID="c98ca899829524f929e6a637495ae92006b1826cc73e11429a9284832e622a2d" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.196732 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d3e5dc3-470a-4fa0-b17c-733457329c79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.196770 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q29tl\" (UniqueName: \"kubernetes.io/projected/4d3e5dc3-470a-4fa0-b17c-733457329c79-kube-api-access-q29tl\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.196784 4933 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4d3e5dc3-470a-4fa0-b17c-733457329c79-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.196796 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d3e5dc3-470a-4fa0-b17c-733457329c79-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.218935 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.220299 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d3e5dc3-470a-4fa0-b17c-733457329c79-config-data" (OuterVolumeSpecName: "config-data") pod "4d3e5dc3-470a-4fa0-b17c-733457329c79" (UID: "4d3e5dc3-470a-4fa0-b17c-733457329c79"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.247417 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.277707 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6dd45957c5-5f9ff"] Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.283460 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-j7hpq"] Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.289076 4933 scope.go:117] "RemoveContainer" containerID="439c574e9494f6a21f125d27c449e302b3eb1f97678a19d6e808352986326108" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.300885 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d3e5dc3-470a-4fa0-b17c-733457329c79-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.305693 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:52:23 crc kubenswrapper[4933]: E1201 09:52:23.306258 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea" containerName="sg-core" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.306272 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea" containerName="sg-core" Dec 01 09:52:23 crc kubenswrapper[4933]: E1201 09:52:23.306288 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d3e5dc3-470a-4fa0-b17c-733457329c79" containerName="cinder-db-sync" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.306295 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d3e5dc3-470a-4fa0-b17c-733457329c79" containerName="cinder-db-sync" Dec 01 09:52:23 crc kubenswrapper[4933]: E1201 09:52:23.306327 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea" containerName="ceilometer-notification-agent" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.306334 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea" containerName="ceilometer-notification-agent" Dec 01 09:52:23 crc kubenswrapper[4933]: E1201 09:52:23.306342 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea" containerName="proxy-httpd" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.306348 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea" containerName="proxy-httpd" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.306559 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d3e5dc3-470a-4fa0-b17c-733457329c79" containerName="cinder-db-sync" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.306571 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea" containerName="sg-core" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.306582 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea" containerName="proxy-httpd" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.306592 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea" containerName="ceilometer-notification-agent" Dec 01 09:52:23 crc 
kubenswrapper[4933]: I1201 09:52:23.308555 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.311861 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.312119 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.337546 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:52:23 crc kubenswrapper[4933]: W1201 09:52:23.351660 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19520328_8d8b_4f49_8c93_82cdfb3623c4.slice/crio-3dc69f5827fbd401ff2cf2f71fc1beab264110d9d3c3e518238543633b242253 WatchSource:0}: Error finding container 3dc69f5827fbd401ff2cf2f71fc1beab264110d9d3c3e518238543633b242253: Status 404 returned error can't find the container with id 3dc69f5827fbd401ff2cf2f71fc1beab264110d9d3c3e518238543633b242253 Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.403660 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-scripts\") pod \"ceilometer-0\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " pod="openstack/ceilometer-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.403809 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw297\" (UniqueName: \"kubernetes.io/projected/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-kube-api-access-fw297\") pod \"ceilometer-0\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " pod="openstack/ceilometer-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.403916 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-config-data\") pod \"ceilometer-0\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " pod="openstack/ceilometer-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.403953 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-log-httpd\") pod \"ceilometer-0\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " pod="openstack/ceilometer-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.404049 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-run-httpd\") pod \"ceilometer-0\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " pod="openstack/ceilometer-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.404075 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " pod="openstack/ceilometer-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.404134 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " pod="openstack/ceilometer-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.416356 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-587d6fd9d4-mtz4n"] Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.451094 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5b85f87c74-hvnkk"] Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.506498 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-scripts\") pod \"ceilometer-0\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " pod="openstack/ceilometer-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.506629 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw297\" (UniqueName: \"kubernetes.io/projected/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-kube-api-access-fw297\") pod \"ceilometer-0\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " pod="openstack/ceilometer-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.506713 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-config-data\") pod \"ceilometer-0\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " pod="openstack/ceilometer-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.506743 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-log-httpd\") pod \"ceilometer-0\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " pod="openstack/ceilometer-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.506817 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-run-httpd\") pod \"ceilometer-0\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " pod="openstack/ceilometer-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.506845 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " pod="openstack/ceilometer-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.506887 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " pod="openstack/ceilometer-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.507776 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-log-httpd\") pod \"ceilometer-0\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " pod="openstack/ceilometer-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.507911 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-run-httpd\") pod \"ceilometer-0\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " pod="openstack/ceilometer-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.513469 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-config-data\") pod \"ceilometer-0\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " pod="openstack/ceilometer-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.514355 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " pod="openstack/ceilometer-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.514602 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-scripts\") pod \"ceilometer-0\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " pod="openstack/ceilometer-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.539175 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " pod="openstack/ceilometer-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.543568 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw297\" (UniqueName: \"kubernetes.io/projected/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-kube-api-access-fw297\") pod \"ceilometer-0\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " pod="openstack/ceilometer-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.589555 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.621045 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.626933 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.634866 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.635122 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.637272 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.637498 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9964x" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.655875 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.714073 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e653c565-dc91-44a0-956f-ca5e840b47e6-scripts\") pod \"cinder-scheduler-0\" (UID: \"e653c565-dc91-44a0-956f-ca5e840b47e6\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.714666 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e653c565-dc91-44a0-956f-ca5e840b47e6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e653c565-dc91-44a0-956f-ca5e840b47e6\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.714770 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e653c565-dc91-44a0-956f-ca5e840b47e6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e653c565-dc91-44a0-956f-ca5e840b47e6\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.714978 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e653c565-dc91-44a0-956f-ca5e840b47e6-config-data\") pod \"cinder-scheduler-0\" (UID: \"e653c565-dc91-44a0-956f-ca5e840b47e6\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.715007 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwq68\" (UniqueName: \"kubernetes.io/projected/e653c565-dc91-44a0-956f-ca5e840b47e6-kube-api-access-gwq68\") pod \"cinder-scheduler-0\" (UID: \"e653c565-dc91-44a0-956f-ca5e840b47e6\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.715089 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e653c565-dc91-44a0-956f-ca5e840b47e6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e653c565-dc91-44a0-956f-ca5e840b47e6\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.777026 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea" path="/var/lib/kubelet/pods/45a1fb8d-e0ee-4470-aff1-b61e1b3dfdea/volumes" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.788823 4933 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-j7hpq"] Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.825039 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e653c565-dc91-44a0-956f-ca5e840b47e6-config-data\") pod \"cinder-scheduler-0\" (UID: \"e653c565-dc91-44a0-956f-ca5e840b47e6\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.825111 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwq68\" (UniqueName: \"kubernetes.io/projected/e653c565-dc91-44a0-956f-ca5e840b47e6-kube-api-access-gwq68\") pod \"cinder-scheduler-0\" (UID: \"e653c565-dc91-44a0-956f-ca5e840b47e6\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.825173 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e653c565-dc91-44a0-956f-ca5e840b47e6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e653c565-dc91-44a0-956f-ca5e840b47e6\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.825354 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e653c565-dc91-44a0-956f-ca5e840b47e6-scripts\") pod \"cinder-scheduler-0\" (UID: \"e653c565-dc91-44a0-956f-ca5e840b47e6\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.825439 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e653c565-dc91-44a0-956f-ca5e840b47e6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e653c565-dc91-44a0-956f-ca5e840b47e6\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.825479 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e653c565-dc91-44a0-956f-ca5e840b47e6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e653c565-dc91-44a0-956f-ca5e840b47e6\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.825497 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e653c565-dc91-44a0-956f-ca5e840b47e6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e653c565-dc91-44a0-956f-ca5e840b47e6\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.834427 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-9wb2z"] Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.836971 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.843328 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e653c565-dc91-44a0-956f-ca5e840b47e6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e653c565-dc91-44a0-956f-ca5e840b47e6\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.852529 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e653c565-dc91-44a0-956f-ca5e840b47e6-scripts\") pod \"cinder-scheduler-0\" (UID: \"e653c565-dc91-44a0-956f-ca5e840b47e6\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.858907 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e653c565-dc91-44a0-956f-ca5e840b47e6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e653c565-dc91-44a0-956f-ca5e840b47e6\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.861751 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e653c565-dc91-44a0-956f-ca5e840b47e6-config-data\") pod \"cinder-scheduler-0\" (UID: \"e653c565-dc91-44a0-956f-ca5e840b47e6\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.894041 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-9wb2z"] Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.909243 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwq68\" (UniqueName: \"kubernetes.io/projected/e653c565-dc91-44a0-956f-ca5e840b47e6-kube-api-access-gwq68\") pod \"cinder-scheduler-0\" (UID: \"e653c565-dc91-44a0-956f-ca5e840b47e6\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.938816 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-config\") pod \"dnsmasq-dns-5c9776ccc5-9wb2z\" (UID: \"3a28f786-8a38-4e52-ba6c-21550508ca03\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.939460 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bnpq\" (UniqueName: \"kubernetes.io/projected/3a28f786-8a38-4e52-ba6c-21550508ca03-kube-api-access-2bnpq\") pod \"dnsmasq-dns-5c9776ccc5-9wb2z\" (UID: \"3a28f786-8a38-4e52-ba6c-21550508ca03\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.939503 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-9wb2z\" (UID: \"3a28f786-8a38-4e52-ba6c-21550508ca03\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.939602 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-dns-svc\") pod 
\"dnsmasq-dns-5c9776ccc5-9wb2z\" (UID: \"3a28f786-8a38-4e52-ba6c-21550508ca03\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.939703 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-9wb2z\" (UID: \"3a28f786-8a38-4e52-ba6c-21550508ca03\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.939841 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-9wb2z\" (UID: \"3a28f786-8a38-4e52-ba6c-21550508ca03\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" Dec 01 09:52:23 crc kubenswrapper[4933]: I1201 09:52:23.990039 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.041618 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-9wb2z\" (UID: \"3a28f786-8a38-4e52-ba6c-21550508ca03\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.041788 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-9wb2z\" (UID: \"3a28f786-8a38-4e52-ba6c-21550508ca03\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.041874 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-9wb2z\" (UID: \"3a28f786-8a38-4e52-ba6c-21550508ca03\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.041959 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-config\") pod \"dnsmasq-dns-5c9776ccc5-9wb2z\" (UID: \"3a28f786-8a38-4e52-ba6c-21550508ca03\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.042018 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bnpq\" (UniqueName: \"kubernetes.io/projected/3a28f786-8a38-4e52-ba6c-21550508ca03-kube-api-access-2bnpq\") pod \"dnsmasq-dns-5c9776ccc5-9wb2z\" (UID: \"3a28f786-8a38-4e52-ba6c-21550508ca03\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.042053 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-9wb2z\" (UID: \"3a28f786-8a38-4e52-ba6c-21550508ca03\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.049883 4933 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-config\") pod \"dnsmasq-dns-5c9776ccc5-9wb2z\" (UID: \"3a28f786-8a38-4e52-ba6c-21550508ca03\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.049945 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-9wb2z\" (UID: \"3a28f786-8a38-4e52-ba6c-21550508ca03\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.050660 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-9wb2z\" (UID: \"3a28f786-8a38-4e52-ba6c-21550508ca03\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.051430 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-9wb2z\" (UID: \"3a28f786-8a38-4e52-ba6c-21550508ca03\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.052998 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.057993 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-9wb2z\" (UID: \"3a28f786-8a38-4e52-ba6c-21550508ca03\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.077103 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.082644 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bnpq\" (UniqueName: \"kubernetes.io/projected/3a28f786-8a38-4e52-ba6c-21550508ca03-kube-api-access-2bnpq\") pod \"dnsmasq-dns-5c9776ccc5-9wb2z\" (UID: \"3a28f786-8a38-4e52-ba6c-21550508ca03\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.090490 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.097922 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5dd758bcf-r4prx" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.137470 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.154746 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6155aa32-6379-46de-8488-d7cc09eac1f7-logs\") pod \"cinder-api-0\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " pod="openstack/cinder-api-0" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.154920 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6155aa32-6379-46de-8488-d7cc09eac1f7-scripts\") pod \"cinder-api-0\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " pod="openstack/cinder-api-0" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.154973 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6155aa32-6379-46de-8488-d7cc09eac1f7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " pod="openstack/cinder-api-0" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.155020 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6155aa32-6379-46de-8488-d7cc09eac1f7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " pod="openstack/cinder-api-0" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.155106 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6155aa32-6379-46de-8488-d7cc09eac1f7-config-data-custom\") pod \"cinder-api-0\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " pod="openstack/cinder-api-0" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.155144 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6155aa32-6379-46de-8488-d7cc09eac1f7-config-data\") pod \"cinder-api-0\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " pod="openstack/cinder-api-0" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.156456 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbk72\" (UniqueName: \"kubernetes.io/projected/6155aa32-6379-46de-8488-d7cc09eac1f7-kube-api-access-jbk72\") pod \"cinder-api-0\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " pod="openstack/cinder-api-0" Dec 
01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.181405 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.302277 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6155aa32-6379-46de-8488-d7cc09eac1f7-logs\") pod \"cinder-api-0\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " pod="openstack/cinder-api-0" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.302482 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6155aa32-6379-46de-8488-d7cc09eac1f7-scripts\") pod \"cinder-api-0\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " pod="openstack/cinder-api-0" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.302530 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6155aa32-6379-46de-8488-d7cc09eac1f7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " pod="openstack/cinder-api-0" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.302570 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6155aa32-6379-46de-8488-d7cc09eac1f7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " pod="openstack/cinder-api-0" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.302673 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6155aa32-6379-46de-8488-d7cc09eac1f7-config-data-custom\") pod \"cinder-api-0\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " pod="openstack/cinder-api-0" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.302700 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6155aa32-6379-46de-8488-d7cc09eac1f7-config-data\") pod \"cinder-api-0\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " pod="openstack/cinder-api-0" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.302736 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbk72\" (UniqueName: \"kubernetes.io/projected/6155aa32-6379-46de-8488-d7cc09eac1f7-kube-api-access-jbk72\") pod \"cinder-api-0\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " pod="openstack/cinder-api-0" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.308937 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5b85f87c74-hvnkk" event={"ID":"036e08a4-0b6f-498f-a851-723b07c2f687","Type":"ContainerStarted","Data":"2597b7ef5a54442424ff0725d86eba275658b447580df3be9637b91f47a7f6fa"} Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.317492 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6155aa32-6379-46de-8488-d7cc09eac1f7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " pod="openstack/cinder-api-0" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.319696 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6155aa32-6379-46de-8488-d7cc09eac1f7-logs\") pod \"cinder-api-0\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " pod="openstack/cinder-api-0" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.329673 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6155aa32-6379-46de-8488-d7cc09eac1f7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " pod="openstack/cinder-api-0" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.355863 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6155aa32-6379-46de-8488-d7cc09eac1f7-config-data-custom\") pod \"cinder-api-0\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " pod="openstack/cinder-api-0" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.358427 4933 generic.go:334] "Generic (PLEG): container finished" podID="5db16dd0-ff44-4389-af0c-934ca6a8cf4e" containerID="99f40a6ee75597adad7b98717363d9b62e3ecd901d04bedd1b2d0feb02bb0b77" exitCode=0 Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.358563 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-j7hpq" event={"ID":"5db16dd0-ff44-4389-af0c-934ca6a8cf4e","Type":"ContainerDied","Data":"99f40a6ee75597adad7b98717363d9b62e3ecd901d04bedd1b2d0feb02bb0b77"} Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.358636 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-j7hpq" event={"ID":"5db16dd0-ff44-4389-af0c-934ca6a8cf4e","Type":"ContainerStarted","Data":"7742c5ad030ef5263eac56fbc7414480a52950f54575e9c12330271349ed74dd"} Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.358465 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbk72\" (UniqueName: \"kubernetes.io/projected/6155aa32-6379-46de-8488-d7cc09eac1f7-kube-api-access-jbk72\") pod \"cinder-api-0\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " pod="openstack/cinder-api-0" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.365251 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6155aa32-6379-46de-8488-d7cc09eac1f7-config-data\") pod \"cinder-api-0\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " pod="openstack/cinder-api-0" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.421821 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6155aa32-6379-46de-8488-d7cc09eac1f7-scripts\") pod \"cinder-api-0\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " pod="openstack/cinder-api-0" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.445125 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6dd45957c5-5f9ff" event={"ID":"19520328-8d8b-4f49-8c93-82cdfb3623c4","Type":"ContainerStarted","Data":"3dc69f5827fbd401ff2cf2f71fc1beab264110d9d3c3e518238543633b242253"} Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.453715 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-587d6fd9d4-mtz4n" event={"ID":"db6a88f7-d4dd-4a87-88f5-499bb9f5d377","Type":"ContainerStarted","Data":"f0cf57c1e8184f0bc31dc2e70a2650d0fc6c2e34cf642fe12952dd0d92b803bc"} Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.453771 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-587d6fd9d4-mtz4n" event={"ID":"db6a88f7-d4dd-4a87-88f5-499bb9f5d377","Type":"ContainerStarted","Data":"7b99dbda9bb50af65804a7c5e6b4d0d151c5d054154907357c78df61e1ad7434"} Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.459830 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.526727 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-66cbb488fb-wvfw6"] Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.527652 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-66cbb488fb-wvfw6" podUID="ecffe4c0-9de1-418c-b57e-d0484fb89482" containerName="neutron-api" containerID="cri-o://b39c4bcf1ee7c55db12b2b7f455dc62fc933e108cf52cde8ca2d93d667fde979" gracePeriod=30 Dec 01 09:52:24 crc kubenswrapper[4933]: I1201 09:52:24.528917 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-66cbb488fb-wvfw6" podUID="ecffe4c0-9de1-418c-b57e-d0484fb89482" containerName="neutron-httpd" containerID="cri-o://ad68bf3884fdf1eba62ef989dcb28890bf939cf689eba4b31551a91adbb028e9" gracePeriod=30 Dec 01 09:52:25 crc kubenswrapper[4933]: I1201 09:52:25.007498 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:52:25 crc kubenswrapper[4933]: I1201 09:52:25.344147 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-9wb2z"] Dec 01 09:52:25 crc kubenswrapper[4933]: W1201 09:52:25.376550 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a28f786_8a38_4e52_ba6c_21550508ca03.slice/crio-9340c9e748ac0c0f36600deb17f3f4e55083e6a98905d5c49e02969165924e87 WatchSource:0}: Error finding container 9340c9e748ac0c0f36600deb17f3f4e55083e6a98905d5c49e02969165924e87: Status 404 returned error can't find the container with id 9340c9e748ac0c0f36600deb17f3f4e55083e6a98905d5c49e02969165924e87 Dec 01 09:52:25 crc kubenswrapper[4933]: I1201 09:52:25.394722 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 09:52:25 crc kubenswrapper[4933]: I1201 09:52:25.501138 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-j7hpq" event={"ID":"5db16dd0-ff44-4389-af0c-934ca6a8cf4e","Type":"ContainerDied","Data":"7742c5ad030ef5263eac56fbc7414480a52950f54575e9c12330271349ed74dd"} Dec 01 09:52:25 crc kubenswrapper[4933]: I1201 09:52:25.501609 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7742c5ad030ef5263eac56fbc7414480a52950f54575e9c12330271349ed74dd" Dec 01 09:52:25 crc kubenswrapper[4933]: I1201 09:52:25.514444 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"694ae277-c7eb-4d3c-8125-b5fb7fa45b71","Type":"ContainerStarted","Data":"216c0689e2064419bf5f0d5e413dd3b7a55d1be63630efa549fc71cc11c2557d"} Dec 01 09:52:25 crc kubenswrapper[4933]: I1201 09:52:25.530740 4933 generic.go:334] "Generic (PLEG): container finished" podID="ecffe4c0-9de1-418c-b57e-d0484fb89482" containerID="ad68bf3884fdf1eba62ef989dcb28890bf939cf689eba4b31551a91adbb028e9" exitCode=0 Dec 01 09:52:25 crc kubenswrapper[4933]: I1201 09:52:25.530819 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66cbb488fb-wvfw6" 
event={"ID":"ecffe4c0-9de1-418c-b57e-d0484fb89482","Type":"ContainerDied","Data":"ad68bf3884fdf1eba62ef989dcb28890bf939cf689eba4b31551a91adbb028e9"} Dec 01 09:52:25 crc kubenswrapper[4933]: I1201 09:52:25.533180 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e653c565-dc91-44a0-956f-ca5e840b47e6","Type":"ContainerStarted","Data":"27a5c9b23330dd43ea3dda8547b01d3b42203080cbc5490db1e3d41f853b86e9"} Dec 01 09:52:25 crc kubenswrapper[4933]: I1201 09:52:25.536258 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-587d6fd9d4-mtz4n" event={"ID":"db6a88f7-d4dd-4a87-88f5-499bb9f5d377","Type":"ContainerStarted","Data":"1a2890845011c5bb2a175efb25a11484ad26affcf86f62d84312cfbefb673da9"} Dec 01 09:52:25 crc kubenswrapper[4933]: I1201 09:52:25.536317 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-587d6fd9d4-mtz4n" Dec 01 09:52:25 crc kubenswrapper[4933]: I1201 09:52:25.536348 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-587d6fd9d4-mtz4n" Dec 01 09:52:25 crc kubenswrapper[4933]: I1201 09:52:25.538742 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" event={"ID":"3a28f786-8a38-4e52-ba6c-21550508ca03","Type":"ContainerStarted","Data":"9340c9e748ac0c0f36600deb17f3f4e55083e6a98905d5c49e02969165924e87"} Dec 01 09:52:25 crc kubenswrapper[4933]: I1201 09:52:25.677814 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-j7hpq" Dec 01 09:52:25 crc kubenswrapper[4933]: I1201 09:52:25.682734 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-587d6fd9d4-mtz4n" podStartSLOduration=3.682704029 podStartE2EDuration="3.682704029s" podCreationTimestamp="2025-12-01 09:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:52:25.661030299 +0000 UTC m=+1236.302753924" watchObservedRunningTime="2025-12-01 09:52:25.682704029 +0000 UTC m=+1236.324427654" Dec 01 09:52:25 crc kubenswrapper[4933]: I1201 09:52:25.732684 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 01 09:52:25 crc kubenswrapper[4933]: I1201 09:52:25.864402 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7q5ts\" (UniqueName: \"kubernetes.io/projected/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-kube-api-access-7q5ts\") pod \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\" (UID: \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\") " Dec 01 09:52:25 crc kubenswrapper[4933]: I1201 09:52:25.864574 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-dns-svc\") pod \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\" (UID: \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\") " Dec 01 09:52:25 crc kubenswrapper[4933]: I1201 09:52:25.864750 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-dns-swift-storage-0\") pod \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\" (UID: \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\") " Dec 01 09:52:25 crc kubenswrapper[4933]: I1201 09:52:25.864791 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-ovsdbserver-sb\") pod \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\" (UID: \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\") " Dec 01 09:52:25 crc kubenswrapper[4933]: I1201 09:52:25.864832 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-config\") pod \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\" (UID: \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\") " Dec 01 09:52:25 crc kubenswrapper[4933]: I1201 09:52:25.864873 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-ovsdbserver-nb\") pod \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\" (UID: \"5db16dd0-ff44-4389-af0c-934ca6a8cf4e\") " Dec 01 09:52:25 crc kubenswrapper[4933]: I1201 09:52:25.901479 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-kube-api-access-7q5ts" (OuterVolumeSpecName: "kube-api-access-7q5ts") pod "5db16dd0-ff44-4389-af0c-934ca6a8cf4e" (UID: "5db16dd0-ff44-4389-af0c-934ca6a8cf4e"). InnerVolumeSpecName "kube-api-access-7q5ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:52:25 crc kubenswrapper[4933]: I1201 09:52:25.969368 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7q5ts\" (UniqueName: \"kubernetes.io/projected/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-kube-api-access-7q5ts\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:25 crc kubenswrapper[4933]: I1201 09:52:25.984925 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-config" (OuterVolumeSpecName: "config") pod "5db16dd0-ff44-4389-af0c-934ca6a8cf4e" (UID: "5db16dd0-ff44-4389-af0c-934ca6a8cf4e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:52:26 crc kubenswrapper[4933]: I1201 09:52:26.051028 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5db16dd0-ff44-4389-af0c-934ca6a8cf4e" (UID: "5db16dd0-ff44-4389-af0c-934ca6a8cf4e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:52:26 crc kubenswrapper[4933]: I1201 09:52:26.070829 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5db16dd0-ff44-4389-af0c-934ca6a8cf4e" (UID: "5db16dd0-ff44-4389-af0c-934ca6a8cf4e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:52:26 crc kubenswrapper[4933]: I1201 09:52:26.085389 4933 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:26 crc kubenswrapper[4933]: I1201 09:52:26.085440 4933 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:26 crc kubenswrapper[4933]: I1201 09:52:26.085452 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:26 crc kubenswrapper[4933]: I1201 09:52:26.099656 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5db16dd0-ff44-4389-af0c-934ca6a8cf4e" (UID: "5db16dd0-ff44-4389-af0c-934ca6a8cf4e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:52:26 crc kubenswrapper[4933]: I1201 09:52:26.117251 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5db16dd0-ff44-4389-af0c-934ca6a8cf4e" (UID: "5db16dd0-ff44-4389-af0c-934ca6a8cf4e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:52:26 crc kubenswrapper[4933]: I1201 09:52:26.188546 4933 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:26 crc kubenswrapper[4933]: I1201 09:52:26.188607 4933 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5db16dd0-ff44-4389-af0c-934ca6a8cf4e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:26 crc kubenswrapper[4933]: I1201 09:52:26.558048 4933 generic.go:334] "Generic (PLEG): container finished" podID="3a28f786-8a38-4e52-ba6c-21550508ca03" containerID="01e2a7dfb407ac46ccec394872ecd3dd9194e7eeb9fd10dd801898d2e29d2b0c" exitCode=0 Dec 01 09:52:26 crc kubenswrapper[4933]: I1201 09:52:26.558156 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" event={"ID":"3a28f786-8a38-4e52-ba6c-21550508ca03","Type":"ContainerDied","Data":"01e2a7dfb407ac46ccec394872ecd3dd9194e7eeb9fd10dd801898d2e29d2b0c"} Dec 01 09:52:26 crc kubenswrapper[4933]: I1201 09:52:26.562172 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-j7hpq" Dec 01 09:52:26 crc kubenswrapper[4933]: I1201 09:52:26.562191 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6155aa32-6379-46de-8488-d7cc09eac1f7","Type":"ContainerStarted","Data":"7a21c3bcd0d643735cc404294017b0e92f0a88ee9c0c0be95cc65fe0a39bdf8e"} Dec 01 09:52:26 crc kubenswrapper[4933]: I1201 09:52:26.726039 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-j7hpq"] Dec 01 09:52:26 crc kubenswrapper[4933]: I1201 09:52:26.739236 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-j7hpq"] Dec 01 09:52:27 crc kubenswrapper[4933]: I1201 09:52:27.582803 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6155aa32-6379-46de-8488-d7cc09eac1f7","Type":"ContainerStarted","Data":"cc4a6512e734de66b8f8f12ea39dbca28857a11aa0500977c0821115a94e37cc"} Dec 01 09:52:27 crc kubenswrapper[4933]: I1201 09:52:27.591784 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"694ae277-c7eb-4d3c-8125-b5fb7fa45b71","Type":"ContainerStarted","Data":"68a0edf398f24c91dfb2d4c7654df6ef970fb66c7eab8b6a360a095b264f4dc1"} Dec 01 09:52:27 crc kubenswrapper[4933]: I1201 09:52:27.693403 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5db16dd0-ff44-4389-af0c-934ca6a8cf4e" path="/var/lib/kubelet/pods/5db16dd0-ff44-4389-af0c-934ca6a8cf4e/volumes" Dec 01 09:52:27 crc kubenswrapper[4933]: I1201 09:52:27.932248 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 01 09:52:28 crc kubenswrapper[4933]: I1201 09:52:28.672402 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" event={"ID":"3a28f786-8a38-4e52-ba6c-21550508ca03","Type":"ContainerStarted","Data":"e9c1eeae23045db9018521a1bf647918ded90ff9a9aa98fa3c10fff132f416b7"} Dec 01 09:52:28 crc kubenswrapper[4933]: I1201 09:52:28.673108 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" Dec 01 09:52:28 crc kubenswrapper[4933]: I1201 09:52:28.706242 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" podStartSLOduration=5.7062239219999995 podStartE2EDuration="5.706223922s" podCreationTimestamp="2025-12-01 09:52:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:52:28.704025978 +0000 UTC m=+1239.345749593" watchObservedRunningTime="2025-12-01 09:52:28.706223922 +0000 UTC m=+1239.347947537" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.582429 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-76fbd6d8c8-kdqkn"] Dec 01 09:52:29 crc kubenswrapper[4933]: E1201 09:52:29.592430 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db16dd0-ff44-4389-af0c-934ca6a8cf4e" containerName="init" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.592635 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db16dd0-ff44-4389-af0c-934ca6a8cf4e" containerName="init" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.592911 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db16dd0-ff44-4389-af0c-934ca6a8cf4e" containerName="init" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.594089 
4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.608837 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.609150 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.703100 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/006eae88-9faa-428f-9d0c-a9fd104b7d06-internal-tls-certs\") pod \"barbican-api-76fbd6d8c8-kdqkn\" (UID: \"006eae88-9faa-428f-9d0c-a9fd104b7d06\") " pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.703185 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/006eae88-9faa-428f-9d0c-a9fd104b7d06-logs\") pod \"barbican-api-76fbd6d8c8-kdqkn\" (UID: \"006eae88-9faa-428f-9d0c-a9fd104b7d06\") " pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.703223 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76p5w\" (UniqueName: \"kubernetes.io/projected/006eae88-9faa-428f-9d0c-a9fd104b7d06-kube-api-access-76p5w\") pod \"barbican-api-76fbd6d8c8-kdqkn\" (UID: \"006eae88-9faa-428f-9d0c-a9fd104b7d06\") " pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.703273 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/006eae88-9faa-428f-9d0c-a9fd104b7d06-config-data\") pod \"barbican-api-76fbd6d8c8-kdqkn\" (UID: \"006eae88-9faa-428f-9d0c-a9fd104b7d06\") " pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.703323 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/006eae88-9faa-428f-9d0c-a9fd104b7d06-config-data-custom\") pod \"barbican-api-76fbd6d8c8-kdqkn\" (UID: \"006eae88-9faa-428f-9d0c-a9fd104b7d06\") " pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.703353 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/006eae88-9faa-428f-9d0c-a9fd104b7d06-combined-ca-bundle\") pod \"barbican-api-76fbd6d8c8-kdqkn\" (UID: \"006eae88-9faa-428f-9d0c-a9fd104b7d06\") " pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.703422 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/006eae88-9faa-428f-9d0c-a9fd104b7d06-public-tls-certs\") pod \"barbican-api-76fbd6d8c8-kdqkn\" (UID: \"006eae88-9faa-428f-9d0c-a9fd104b7d06\") " pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.804010 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-76fbd6d8c8-kdqkn"] Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 
09:52:29.805130 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/006eae88-9faa-428f-9d0c-a9fd104b7d06-config-data-custom\") pod \"barbican-api-76fbd6d8c8-kdqkn\" (UID: \"006eae88-9faa-428f-9d0c-a9fd104b7d06\") " pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.805162 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/006eae88-9faa-428f-9d0c-a9fd104b7d06-combined-ca-bundle\") pod \"barbican-api-76fbd6d8c8-kdqkn\" (UID: \"006eae88-9faa-428f-9d0c-a9fd104b7d06\") " pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.805222 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/006eae88-9faa-428f-9d0c-a9fd104b7d06-public-tls-certs\") pod \"barbican-api-76fbd6d8c8-kdqkn\" (UID: \"006eae88-9faa-428f-9d0c-a9fd104b7d06\") " pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.805258 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/006eae88-9faa-428f-9d0c-a9fd104b7d06-internal-tls-certs\") pod \"barbican-api-76fbd6d8c8-kdqkn\" (UID: \"006eae88-9faa-428f-9d0c-a9fd104b7d06\") " pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.805290 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/006eae88-9faa-428f-9d0c-a9fd104b7d06-logs\") pod \"barbican-api-76fbd6d8c8-kdqkn\" (UID: \"006eae88-9faa-428f-9d0c-a9fd104b7d06\") " pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.806995 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76p5w\" (UniqueName: \"kubernetes.io/projected/006eae88-9faa-428f-9d0c-a9fd104b7d06-kube-api-access-76p5w\") pod \"barbican-api-76fbd6d8c8-kdqkn\" (UID: \"006eae88-9faa-428f-9d0c-a9fd104b7d06\") " pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.807056 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/006eae88-9faa-428f-9d0c-a9fd104b7d06-config-data\") pod \"barbican-api-76fbd6d8c8-kdqkn\" (UID: \"006eae88-9faa-428f-9d0c-a9fd104b7d06\") " pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.809991 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/006eae88-9faa-428f-9d0c-a9fd104b7d06-logs\") pod \"barbican-api-76fbd6d8c8-kdqkn\" (UID: \"006eae88-9faa-428f-9d0c-a9fd104b7d06\") " pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.817502 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/006eae88-9faa-428f-9d0c-a9fd104b7d06-public-tls-certs\") pod \"barbican-api-76fbd6d8c8-kdqkn\" (UID: \"006eae88-9faa-428f-9d0c-a9fd104b7d06\") " pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.822800 4933 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/barbican-worker-6dd45957c5-5f9ff" event={"ID":"19520328-8d8b-4f49-8c93-82cdfb3623c4","Type":"ContainerStarted","Data":"dd07fb7ddb97fe6da53276d1e93e8e0a1cfadefabf7744c279204d895e77dfcb"} Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.823285 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/006eae88-9faa-428f-9d0c-a9fd104b7d06-combined-ca-bundle\") pod \"barbican-api-76fbd6d8c8-kdqkn\" (UID: \"006eae88-9faa-428f-9d0c-a9fd104b7d06\") " pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.837962 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6dd45957c5-5f9ff" event={"ID":"19520328-8d8b-4f49-8c93-82cdfb3623c4","Type":"ContainerStarted","Data":"42e2dbb983f207674b1cf7bacb3d9d7853ca2bc80a7eafe4fdceb8a348a6f4cb"} Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.841427 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/006eae88-9faa-428f-9d0c-a9fd104b7d06-config-data-custom\") pod \"barbican-api-76fbd6d8c8-kdqkn\" (UID: \"006eae88-9faa-428f-9d0c-a9fd104b7d06\") " pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.852950 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76p5w\" (UniqueName: \"kubernetes.io/projected/006eae88-9faa-428f-9d0c-a9fd104b7d06-kube-api-access-76p5w\") pod \"barbican-api-76fbd6d8c8-kdqkn\" (UID: \"006eae88-9faa-428f-9d0c-a9fd104b7d06\") " pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.859257 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/006eae88-9faa-428f-9d0c-a9fd104b7d06-internal-tls-certs\") pod \"barbican-api-76fbd6d8c8-kdqkn\" (UID: \"006eae88-9faa-428f-9d0c-a9fd104b7d06\") " pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.866034 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/006eae88-9faa-428f-9d0c-a9fd104b7d06-config-data\") pod \"barbican-api-76fbd6d8c8-kdqkn\" (UID: \"006eae88-9faa-428f-9d0c-a9fd104b7d06\") " pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.890858 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e653c565-dc91-44a0-956f-ca5e840b47e6","Type":"ContainerStarted","Data":"d2dd4324b83c4bd4c8b595d1dcd2760987e00ef8dd28446061ec16eca290d1b2"} Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.959048 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5b85f87c74-hvnkk" event={"ID":"036e08a4-0b6f-498f-a851-723b07c2f687","Type":"ContainerStarted","Data":"c700fb055f318977a274484445a8e0d14b20a82792281e0cd372bf6996df602b"} Dec 01 09:52:29 crc kubenswrapper[4933]: I1201 09:52:29.959128 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5b85f87c74-hvnkk" event={"ID":"036e08a4-0b6f-498f-a851-723b07c2f687","Type":"ContainerStarted","Data":"0cef6e68ac31a56ccf397039879f3639d154c37aaad492c3db543634c91596fc"} Dec 01 09:52:30 crc kubenswrapper[4933]: I1201 09:52:30.075959 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:30 crc kubenswrapper[4933]: I1201 09:52:30.112763 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6155aa32-6379-46de-8488-d7cc09eac1f7","Type":"ContainerStarted","Data":"59cb96e3885d5517c00d6ea16b441be8a33a39afb246e69bb04860c977660b2c"} Dec 01 09:52:30 crc kubenswrapper[4933]: I1201 09:52:30.113090 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="6155aa32-6379-46de-8488-d7cc09eac1f7" containerName="cinder-api-log" containerID="cri-o://cc4a6512e734de66b8f8f12ea39dbca28857a11aa0500977c0821115a94e37cc" gracePeriod=30 Dec 01 09:52:30 crc kubenswrapper[4933]: I1201 09:52:30.113467 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 01 09:52:30 crc kubenswrapper[4933]: I1201 09:52:30.113528 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="6155aa32-6379-46de-8488-d7cc09eac1f7" containerName="cinder-api" containerID="cri-o://59cb96e3885d5517c00d6ea16b441be8a33a39afb246e69bb04860c977660b2c" gracePeriod=30 Dec 01 09:52:30 crc kubenswrapper[4933]: I1201 09:52:30.127888 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5b85f87c74-hvnkk" podStartSLOduration=4.5716401399999995 podStartE2EDuration="9.127856446s" podCreationTimestamp="2025-12-01 09:52:21 +0000 UTC" firstStartedPulling="2025-12-01 09:52:23.478655325 +0000 UTC m=+1234.120378950" lastFinishedPulling="2025-12-01 09:52:28.034871641 +0000 UTC m=+1238.676595256" observedRunningTime="2025-12-01 09:52:30.103232822 +0000 UTC m=+1240.744956437" watchObservedRunningTime="2025-12-01 09:52:30.127856446 +0000 UTC m=+1240.769580061" Dec 01 09:52:30 crc kubenswrapper[4933]: I1201 09:52:30.151399 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"694ae277-c7eb-4d3c-8125-b5fb7fa45b71","Type":"ContainerStarted","Data":"e755e38f445fdefd32cf04e17672fa97d04c6ce71929ad05233e802dcb8b2def"} Dec 01 09:52:30 crc kubenswrapper[4933]: I1201 09:52:30.162591 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6dd45957c5-5f9ff" podStartSLOduration=4.426604839 podStartE2EDuration="9.162553525s" podCreationTimestamp="2025-12-01 09:52:21 +0000 UTC" firstStartedPulling="2025-12-01 09:52:23.351053721 +0000 UTC m=+1233.992777336" lastFinishedPulling="2025-12-01 09:52:28.087002407 +0000 UTC m=+1238.728726022" observedRunningTime="2025-12-01 09:52:30.143902828 +0000 UTC m=+1240.785626443" watchObservedRunningTime="2025-12-01 09:52:30.162553525 +0000 UTC m=+1240.804277140" Dec 01 09:52:30 crc kubenswrapper[4933]: I1201 09:52:30.189522 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.189501385 podStartE2EDuration="7.189501385s" podCreationTimestamp="2025-12-01 09:52:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:52:30.175939703 +0000 UTC m=+1240.817663338" watchObservedRunningTime="2025-12-01 09:52:30.189501385 +0000 UTC m=+1240.831225000" Dec 01 09:52:30 crc kubenswrapper[4933]: I1201 09:52:30.973715 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-76fbd6d8c8-kdqkn"] Dec 01 09:52:31 crc 
kubenswrapper[4933]: I1201 09:52:31.222764 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e653c565-dc91-44a0-956f-ca5e840b47e6","Type":"ContainerStarted","Data":"2e6363d795ea8046980eff7c981cf329b989ddce9819341a87bc1a41413fbdfc"} Dec 01 09:52:31 crc kubenswrapper[4933]: I1201 09:52:31.232099 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76fbd6d8c8-kdqkn" event={"ID":"006eae88-9faa-428f-9d0c-a9fd104b7d06","Type":"ContainerStarted","Data":"c650be333744b4be1a6d4f61be934578bc45137344a2f0834398c4502952aa59"} Dec 01 09:52:31 crc kubenswrapper[4933]: I1201 09:52:31.235739 4933 generic.go:334] "Generic (PLEG): container finished" podID="6155aa32-6379-46de-8488-d7cc09eac1f7" containerID="59cb96e3885d5517c00d6ea16b441be8a33a39afb246e69bb04860c977660b2c" exitCode=0 Dec 01 09:52:31 crc kubenswrapper[4933]: I1201 09:52:31.235782 4933 generic.go:334] "Generic (PLEG): container finished" podID="6155aa32-6379-46de-8488-d7cc09eac1f7" containerID="cc4a6512e734de66b8f8f12ea39dbca28857a11aa0500977c0821115a94e37cc" exitCode=143 Dec 01 09:52:31 crc kubenswrapper[4933]: I1201 09:52:31.235835 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6155aa32-6379-46de-8488-d7cc09eac1f7","Type":"ContainerDied","Data":"59cb96e3885d5517c00d6ea16b441be8a33a39afb246e69bb04860c977660b2c"} Dec 01 09:52:31 crc kubenswrapper[4933]: I1201 09:52:31.235869 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6155aa32-6379-46de-8488-d7cc09eac1f7","Type":"ContainerDied","Data":"cc4a6512e734de66b8f8f12ea39dbca28857a11aa0500977c0821115a94e37cc"} Dec 01 09:52:31 crc kubenswrapper[4933]: I1201 09:52:31.245411 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"694ae277-c7eb-4d3c-8125-b5fb7fa45b71","Type":"ContainerStarted","Data":"b0292fc49a23ce180dfbd438fac50a747201f9f1dc65e2c6deb7e0e71d0d6ce5"} Dec 01 09:52:31 crc kubenswrapper[4933]: I1201 09:52:31.317828 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.668726422 podStartE2EDuration="8.317788545s" podCreationTimestamp="2025-12-01 09:52:23 +0000 UTC" firstStartedPulling="2025-12-01 09:52:25.385796998 +0000 UTC m=+1236.027520613" lastFinishedPulling="2025-12-01 09:52:28.034859121 +0000 UTC m=+1238.676582736" observedRunningTime="2025-12-01 09:52:31.308848456 +0000 UTC m=+1241.950572081" watchObservedRunningTime="2025-12-01 09:52:31.317788545 +0000 UTC m=+1241.959512160" Dec 01 09:52:31 crc kubenswrapper[4933]: I1201 09:52:31.830680 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.001889 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6155aa32-6379-46de-8488-d7cc09eac1f7-etc-machine-id\") pod \"6155aa32-6379-46de-8488-d7cc09eac1f7\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.002020 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6155aa32-6379-46de-8488-d7cc09eac1f7-combined-ca-bundle\") pod \"6155aa32-6379-46de-8488-d7cc09eac1f7\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.002040 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6155aa32-6379-46de-8488-d7cc09eac1f7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6155aa32-6379-46de-8488-d7cc09eac1f7" (UID: "6155aa32-6379-46de-8488-d7cc09eac1f7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.002072 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbk72\" (UniqueName: \"kubernetes.io/projected/6155aa32-6379-46de-8488-d7cc09eac1f7-kube-api-access-jbk72\") pod \"6155aa32-6379-46de-8488-d7cc09eac1f7\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.002247 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6155aa32-6379-46de-8488-d7cc09eac1f7-config-data\") pod \"6155aa32-6379-46de-8488-d7cc09eac1f7\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.002507 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6155aa32-6379-46de-8488-d7cc09eac1f7-config-data-custom\") pod \"6155aa32-6379-46de-8488-d7cc09eac1f7\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.002548 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6155aa32-6379-46de-8488-d7cc09eac1f7-scripts\") pod \"6155aa32-6379-46de-8488-d7cc09eac1f7\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.002639 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6155aa32-6379-46de-8488-d7cc09eac1f7-logs\") pod \"6155aa32-6379-46de-8488-d7cc09eac1f7\" (UID: \"6155aa32-6379-46de-8488-d7cc09eac1f7\") " Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.003114 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6155aa32-6379-46de-8488-d7cc09eac1f7-logs" (OuterVolumeSpecName: "logs") pod "6155aa32-6379-46de-8488-d7cc09eac1f7" (UID: "6155aa32-6379-46de-8488-d7cc09eac1f7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.003713 4933 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6155aa32-6379-46de-8488-d7cc09eac1f7-logs\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.003733 4933 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6155aa32-6379-46de-8488-d7cc09eac1f7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.017621 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6155aa32-6379-46de-8488-d7cc09eac1f7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6155aa32-6379-46de-8488-d7cc09eac1f7" (UID: "6155aa32-6379-46de-8488-d7cc09eac1f7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.023372 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6155aa32-6379-46de-8488-d7cc09eac1f7-kube-api-access-jbk72" (OuterVolumeSpecName: "kube-api-access-jbk72") pod "6155aa32-6379-46de-8488-d7cc09eac1f7" (UID: "6155aa32-6379-46de-8488-d7cc09eac1f7"). InnerVolumeSpecName "kube-api-access-jbk72". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.052403 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6155aa32-6379-46de-8488-d7cc09eac1f7-scripts" (OuterVolumeSpecName: "scripts") pod "6155aa32-6379-46de-8488-d7cc09eac1f7" (UID: "6155aa32-6379-46de-8488-d7cc09eac1f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.103584 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6155aa32-6379-46de-8488-d7cc09eac1f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6155aa32-6379-46de-8488-d7cc09eac1f7" (UID: "6155aa32-6379-46de-8488-d7cc09eac1f7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.110190 4933 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6155aa32-6379-46de-8488-d7cc09eac1f7-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.110234 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6155aa32-6379-46de-8488-d7cc09eac1f7-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.110248 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6155aa32-6379-46de-8488-d7cc09eac1f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.110265 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbk72\" (UniqueName: \"kubernetes.io/projected/6155aa32-6379-46de-8488-d7cc09eac1f7-kube-api-access-jbk72\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.130513 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6155aa32-6379-46de-8488-d7cc09eac1f7-config-data" (OuterVolumeSpecName: "config-data") pod "6155aa32-6379-46de-8488-d7cc09eac1f7" (UID: "6155aa32-6379-46de-8488-d7cc09eac1f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.212335 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6155aa32-6379-46de-8488-d7cc09eac1f7-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.300721 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76fbd6d8c8-kdqkn" event={"ID":"006eae88-9faa-428f-9d0c-a9fd104b7d06","Type":"ContainerStarted","Data":"9f735bca8dee392c6b209338ac49ff16ad128ef10ec0f797ed2aaaf934faadab"} Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.363672 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.365214 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6155aa32-6379-46de-8488-d7cc09eac1f7","Type":"ContainerDied","Data":"7a21c3bcd0d643735cc404294017b0e92f0a88ee9c0c0be95cc65fe0a39bdf8e"} Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.365476 4933 scope.go:117] "RemoveContainer" containerID="59cb96e3885d5517c00d6ea16b441be8a33a39afb246e69bb04860c977660b2c" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.474610 4933 scope.go:117] "RemoveContainer" containerID="cc4a6512e734de66b8f8f12ea39dbca28857a11aa0500977c0821115a94e37cc" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.725651 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.739820 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.777363 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 01 09:52:32 crc kubenswrapper[4933]: E1201 09:52:32.778571 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6155aa32-6379-46de-8488-d7cc09eac1f7" containerName="cinder-api-log" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.778606 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="6155aa32-6379-46de-8488-d7cc09eac1f7" containerName="cinder-api-log" Dec 01 09:52:32 crc kubenswrapper[4933]: E1201 09:52:32.778634 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6155aa32-6379-46de-8488-d7cc09eac1f7" containerName="cinder-api" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.778650 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="6155aa32-6379-46de-8488-d7cc09eac1f7" containerName="cinder-api" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.778927 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="6155aa32-6379-46de-8488-d7cc09eac1f7" containerName="cinder-api-log" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.778958 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="6155aa32-6379-46de-8488-d7cc09eac1f7" containerName="cinder-api" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.780914 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.787694 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.788053 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.790444 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.835941 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/adfbde90-8055-49f0-9ccb-83d1502332cd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.836147 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adfbde90-8055-49f0-9ccb-83d1502332cd-logs\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.836219 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adfbde90-8055-49f0-9ccb-83d1502332cd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.836263 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/adfbde90-8055-49f0-9ccb-83d1502332cd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.836300 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adfbde90-8055-49f0-9ccb-83d1502332cd-scripts\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.836384 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/adfbde90-8055-49f0-9ccb-83d1502332cd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.836425 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/adfbde90-8055-49f0-9ccb-83d1502332cd-config-data-custom\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.836453 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adfbde90-8055-49f0-9ccb-83d1502332cd-config-data\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:32 crc 
kubenswrapper[4933]: I1201 09:52:32.836503 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sjqz\" (UniqueName: \"kubernetes.io/projected/adfbde90-8055-49f0-9ccb-83d1502332cd-kube-api-access-4sjqz\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.854595 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.942372 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/adfbde90-8055-49f0-9ccb-83d1502332cd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.942444 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/adfbde90-8055-49f0-9ccb-83d1502332cd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.942492 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adfbde90-8055-49f0-9ccb-83d1502332cd-scripts\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.942512 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/adfbde90-8055-49f0-9ccb-83d1502332cd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.942574 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/adfbde90-8055-49f0-9ccb-83d1502332cd-config-data-custom\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.942613 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adfbde90-8055-49f0-9ccb-83d1502332cd-config-data\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.942686 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sjqz\" (UniqueName: \"kubernetes.io/projected/adfbde90-8055-49f0-9ccb-83d1502332cd-kube-api-access-4sjqz\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.942832 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/adfbde90-8055-49f0-9ccb-83d1502332cd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.943159 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/adfbde90-8055-49f0-9ccb-83d1502332cd-logs\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.943268 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adfbde90-8055-49f0-9ccb-83d1502332cd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.952642 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adfbde90-8055-49f0-9ccb-83d1502332cd-config-data\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.953637 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adfbde90-8055-49f0-9ccb-83d1502332cd-logs\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.959193 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adfbde90-8055-49f0-9ccb-83d1502332cd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.961169 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adfbde90-8055-49f0-9ccb-83d1502332cd-scripts\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.978965 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/adfbde90-8055-49f0-9ccb-83d1502332cd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.980049 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/adfbde90-8055-49f0-9ccb-83d1502332cd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:32 crc kubenswrapper[4933]: I1201 09:52:32.989365 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/adfbde90-8055-49f0-9ccb-83d1502332cd-config-data-custom\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:33 crc kubenswrapper[4933]: I1201 09:52:33.015153 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sjqz\" (UniqueName: \"kubernetes.io/projected/adfbde90-8055-49f0-9ccb-83d1502332cd-kube-api-access-4sjqz\") pod \"cinder-api-0\" (UID: \"adfbde90-8055-49f0-9ccb-83d1502332cd\") " pod="openstack/cinder-api-0" Dec 01 09:52:33 crc kubenswrapper[4933]: I1201 09:52:33.056197 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:33 crc kubenswrapper[4933]: I1201 09:52:33.138208 
4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 01 09:52:33 crc kubenswrapper[4933]: I1201 09:52:33.401460 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76fbd6d8c8-kdqkn" event={"ID":"006eae88-9faa-428f-9d0c-a9fd104b7d06","Type":"ContainerStarted","Data":"d268088f976e05b64a7ef9d9b827202d350ef3cc2440488373d7d2313f6a0f70"} Dec 01 09:52:33 crc kubenswrapper[4933]: I1201 09:52:33.402124 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:33 crc kubenswrapper[4933]: I1201 09:52:33.468986 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-76fbd6d8c8-kdqkn" podStartSLOduration=4.468931244 podStartE2EDuration="4.468931244s" podCreationTimestamp="2025-12-01 09:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:52:33.452285786 +0000 UTC m=+1244.094009391" watchObservedRunningTime="2025-12-01 09:52:33.468931244 +0000 UTC m=+1244.110654859" Dec 01 09:52:33 crc kubenswrapper[4933]: I1201 09:52:33.530604 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"694ae277-c7eb-4d3c-8125-b5fb7fa45b71","Type":"ContainerStarted","Data":"548e12572203ec08f31aeb72e7bf6312905452a77e95e1cb11d34c7b5a8c9661"} Dec 01 09:52:33 crc kubenswrapper[4933]: I1201 09:52:33.532497 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 09:52:33 crc kubenswrapper[4933]: I1201 09:52:33.588173 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.45654148 podStartE2EDuration="10.588153063s" podCreationTimestamp="2025-12-01 09:52:23 +0000 UTC" firstStartedPulling="2025-12-01 09:52:25.06653246 +0000 UTC m=+1235.708256075" lastFinishedPulling="2025-12-01 09:52:32.198144043 +0000 UTC m=+1242.839867658" observedRunningTime="2025-12-01 09:52:33.564529724 +0000 UTC m=+1244.206253349" watchObservedRunningTime="2025-12-01 09:52:33.588153063 +0000 UTC m=+1244.229876668" Dec 01 09:52:33 crc kubenswrapper[4933]: I1201 09:52:33.692017 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6155aa32-6379-46de-8488-d7cc09eac1f7" path="/var/lib/kubelet/pods/6155aa32-6379-46de-8488-d7cc09eac1f7/volumes" Dec 01 09:52:33 crc kubenswrapper[4933]: I1201 09:52:33.911675 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 01 09:52:33 crc kubenswrapper[4933]: I1201 09:52:33.993372 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 01 09:52:34 crc kubenswrapper[4933]: I1201 09:52:34.065537 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6775f97bdb-vs7m8" podUID="6ffcfb41-8086-4e28-b88a-da47dd38a844" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:52:34 crc kubenswrapper[4933]: I1201 09:52:34.066141 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:52:34 crc kubenswrapper[4933]: I1201 09:52:34.067531 4933 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" 
containerStatusID={"Type":"cri-o","ID":"0874941382f63e0c82cae2a5dc94543aa96b15309c19c42f17c114109f943016"} pod="openstack/horizon-6775f97bdb-vs7m8" containerMessage="Container horizon failed startup probe, will be restarted" Dec 01 09:52:34 crc kubenswrapper[4933]: I1201 09:52:34.067585 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6775f97bdb-vs7m8" podUID="6ffcfb41-8086-4e28-b88a-da47dd38a844" containerName="horizon" containerID="cri-o://0874941382f63e0c82cae2a5dc94543aa96b15309c19c42f17c114109f943016" gracePeriod=30 Dec 01 09:52:34 crc kubenswrapper[4933]: I1201 09:52:34.143667 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-75479c6864-2fvz5" podUID="000656f6-99fd-43a3-8ade-31b200d0c18a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:52:34 crc kubenswrapper[4933]: I1201 09:52:34.143799 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:52:34 crc kubenswrapper[4933]: I1201 09:52:34.145762 4933 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"1160013da08ee75c7c8e797e0ae1954be47f186459a81de5a80b7e4e58d7a6e8"} pod="openstack/horizon-75479c6864-2fvz5" containerMessage="Container horizon failed startup probe, will be restarted" Dec 01 09:52:34 crc kubenswrapper[4933]: I1201 09:52:34.149001 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-75479c6864-2fvz5" podUID="000656f6-99fd-43a3-8ade-31b200d0c18a" containerName="horizon" containerID="cri-o://1160013da08ee75c7c8e797e0ae1954be47f186459a81de5a80b7e4e58d7a6e8" gracePeriod=30 Dec 01 09:52:34 crc kubenswrapper[4933]: I1201 09:52:34.198120 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" Dec 01 09:52:34 crc kubenswrapper[4933]: I1201 09:52:34.293881 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-zddv7"] Dec 01 09:52:34 crc kubenswrapper[4933]: I1201 09:52:34.294208 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-zddv7" podUID="35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d" containerName="dnsmasq-dns" containerID="cri-o://16aeea625e8805f7e04e40c09f7e2e48d8f49c14ccad342d7a09212f30db7586" gracePeriod=10 Dec 01 09:52:34 crc kubenswrapper[4933]: I1201 09:52:34.370539 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-55788c59f6-zd5sp" Dec 01 09:52:34 crc kubenswrapper[4933]: I1201 09:52:34.611515 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"adfbde90-8055-49f0-9ccb-83d1502332cd","Type":"ContainerStarted","Data":"ce9a163925c42d97cafe664a0a5572af7e145efb29a6de9b71715d86cd66a9cc"} Dec 01 09:52:34 crc kubenswrapper[4933]: I1201 09:52:34.637622 4933 generic.go:334] "Generic (PLEG): container finished" podID="35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d" containerID="16aeea625e8805f7e04e40c09f7e2e48d8f49c14ccad342d7a09212f30db7586" exitCode=0 Dec 01 09:52:34 crc kubenswrapper[4933]: I1201 09:52:34.638962 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-zddv7" 
event={"ID":"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d","Type":"ContainerDied","Data":"16aeea625e8805f7e04e40c09f7e2e48d8f49c14ccad342d7a09212f30db7586"} Dec 01 09:52:34 crc kubenswrapper[4933]: I1201 09:52:34.639072 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.311842 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-zddv7" Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.464005 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-ovsdbserver-nb\") pod \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\" (UID: \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\") " Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.464122 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-config\") pod \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\" (UID: \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\") " Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.464188 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-dns-svc\") pod \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\" (UID: \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\") " Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.464353 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-dns-swift-storage-0\") pod \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\" (UID: \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\") " Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.464415 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsvpc\" (UniqueName: \"kubernetes.io/projected/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-kube-api-access-dsvpc\") pod \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\" (UID: \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\") " Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.464434 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-ovsdbserver-sb\") pod \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\" (UID: \"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d\") " Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.528362 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-kube-api-access-dsvpc" (OuterVolumeSpecName: "kube-api-access-dsvpc") pod "35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d" (UID: "35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d"). InnerVolumeSpecName "kube-api-access-dsvpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.560505 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d" (UID: "35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.567811 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsvpc\" (UniqueName: \"kubernetes.io/projected/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-kube-api-access-dsvpc\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.567854 4933 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.629700 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d" (UID: "35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.630038 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d" (UID: "35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.654676 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"adfbde90-8055-49f0-9ccb-83d1502332cd","Type":"ContainerStarted","Data":"6e3d9279f4751273625df711c2d0d5cd62d7b0f9c7ce74f90bcff3eadcc5c5b7"} Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.657130 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d" (UID: "35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.658650 4933 generic.go:334] "Generic (PLEG): container finished" podID="ecffe4c0-9de1-418c-b57e-d0484fb89482" containerID="b39c4bcf1ee7c55db12b2b7f455dc62fc933e108cf52cde8ca2d93d667fde979" exitCode=0 Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.658757 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66cbb488fb-wvfw6" event={"ID":"ecffe4c0-9de1-418c-b57e-d0484fb89482","Type":"ContainerDied","Data":"b39c4bcf1ee7c55db12b2b7f455dc62fc933e108cf52cde8ca2d93d667fde979"} Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.660061 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-config" (OuterVolumeSpecName: "config") pod "35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d" (UID: "35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.669427 4933 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.669462 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.669476 4933 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.669489 4933 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.674418 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-zddv7" Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.703707 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66cbb488fb-wvfw6" Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.712080 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-zddv7" event={"ID":"35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d","Type":"ContainerDied","Data":"ae7d2e19857cdcaecbc89f76cdd5c911e95901a9d59b6699c4564d560e0bf657"} Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.712158 4933 scope.go:117] "RemoveContainer" containerID="16aeea625e8805f7e04e40c09f7e2e48d8f49c14ccad342d7a09212f30db7586" Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.769338 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-zddv7"] Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.830507 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-zddv7"] Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.835867 4933 scope.go:117] "RemoveContainer" containerID="9e19d89474e22a0bb8d1f0a3b7197733be223be8be825d8da964ea2c91f8db4f" Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.874165 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ecffe4c0-9de1-418c-b57e-d0484fb89482-config\") pod \"ecffe4c0-9de1-418c-b57e-d0484fb89482\" (UID: \"ecffe4c0-9de1-418c-b57e-d0484fb89482\") " Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.874796 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ecffe4c0-9de1-418c-b57e-d0484fb89482-httpd-config\") pod \"ecffe4c0-9de1-418c-b57e-d0484fb89482\" (UID: \"ecffe4c0-9de1-418c-b57e-d0484fb89482\") " Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.874896 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2qgz\" (UniqueName: \"kubernetes.io/projected/ecffe4c0-9de1-418c-b57e-d0484fb89482-kube-api-access-n2qgz\") pod \"ecffe4c0-9de1-418c-b57e-d0484fb89482\" (UID: \"ecffe4c0-9de1-418c-b57e-d0484fb89482\") " Dec 01 09:52:35 crc 
kubenswrapper[4933]: I1201 09:52:35.875123 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecffe4c0-9de1-418c-b57e-d0484fb89482-ovndb-tls-certs\") pod \"ecffe4c0-9de1-418c-b57e-d0484fb89482\" (UID: \"ecffe4c0-9de1-418c-b57e-d0484fb89482\") " Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.875378 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecffe4c0-9de1-418c-b57e-d0484fb89482-combined-ca-bundle\") pod \"ecffe4c0-9de1-418c-b57e-d0484fb89482\" (UID: \"ecffe4c0-9de1-418c-b57e-d0484fb89482\") " Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.889695 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecffe4c0-9de1-418c-b57e-d0484fb89482-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "ecffe4c0-9de1-418c-b57e-d0484fb89482" (UID: "ecffe4c0-9de1-418c-b57e-d0484fb89482"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.900439 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecffe4c0-9de1-418c-b57e-d0484fb89482-kube-api-access-n2qgz" (OuterVolumeSpecName: "kube-api-access-n2qgz") pod "ecffe4c0-9de1-418c-b57e-d0484fb89482" (UID: "ecffe4c0-9de1-418c-b57e-d0484fb89482"). InnerVolumeSpecName "kube-api-access-n2qgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.981915 4933 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ecffe4c0-9de1-418c-b57e-d0484fb89482-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:35 crc kubenswrapper[4933]: I1201 09:52:35.981976 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2qgz\" (UniqueName: \"kubernetes.io/projected/ecffe4c0-9de1-418c-b57e-d0484fb89482-kube-api-access-n2qgz\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:36 crc kubenswrapper[4933]: I1201 09:52:36.000536 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecffe4c0-9de1-418c-b57e-d0484fb89482-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ecffe4c0-9de1-418c-b57e-d0484fb89482" (UID: "ecffe4c0-9de1-418c-b57e-d0484fb89482"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:52:36 crc kubenswrapper[4933]: I1201 09:52:36.010684 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecffe4c0-9de1-418c-b57e-d0484fb89482-config" (OuterVolumeSpecName: "config") pod "ecffe4c0-9de1-418c-b57e-d0484fb89482" (UID: "ecffe4c0-9de1-418c-b57e-d0484fb89482"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:52:36 crc kubenswrapper[4933]: I1201 09:52:36.024824 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecffe4c0-9de1-418c-b57e-d0484fb89482-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "ecffe4c0-9de1-418c-b57e-d0484fb89482" (UID: "ecffe4c0-9de1-418c-b57e-d0484fb89482"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:52:36 crc kubenswrapper[4933]: I1201 09:52:36.084060 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecffe4c0-9de1-418c-b57e-d0484fb89482-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:36 crc kubenswrapper[4933]: I1201 09:52:36.084113 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ecffe4c0-9de1-418c-b57e-d0484fb89482-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:36 crc kubenswrapper[4933]: I1201 09:52:36.084128 4933 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecffe4c0-9de1-418c-b57e-d0484fb89482-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:36 crc kubenswrapper[4933]: I1201 09:52:36.537179 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-587d6fd9d4-mtz4n" Dec 01 09:52:36 crc kubenswrapper[4933]: I1201 09:52:36.685818 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66cbb488fb-wvfw6" Dec 01 09:52:36 crc kubenswrapper[4933]: I1201 09:52:36.686637 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66cbb488fb-wvfw6" event={"ID":"ecffe4c0-9de1-418c-b57e-d0484fb89482","Type":"ContainerDied","Data":"758461133e6ac20b1fdd3d92a9a9627415e40418dcb6ae152556415904707280"} Dec 01 09:52:36 crc kubenswrapper[4933]: I1201 09:52:36.686816 4933 scope.go:117] "RemoveContainer" containerID="ad68bf3884fdf1eba62ef989dcb28890bf939cf689eba4b31551a91adbb028e9" Dec 01 09:52:36 crc kubenswrapper[4933]: I1201 09:52:36.691691 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"adfbde90-8055-49f0-9ccb-83d1502332cd","Type":"ContainerStarted","Data":"645faccea65f8d563fea2a5c7552b245e773b0baaddaa8ef54303d053354e33d"} Dec 01 09:52:36 crc kubenswrapper[4933]: I1201 09:52:36.692223 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 01 09:52:36 crc kubenswrapper[4933]: I1201 09:52:36.727998 4933 scope.go:117] "RemoveContainer" containerID="b39c4bcf1ee7c55db12b2b7f455dc62fc933e108cf52cde8ca2d93d667fde979" Dec 01 09:52:36 crc kubenswrapper[4933]: I1201 09:52:36.734404 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.73437166 podStartE2EDuration="4.73437166s" podCreationTimestamp="2025-12-01 09:52:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:52:36.731193302 +0000 UTC m=+1247.372916917" watchObservedRunningTime="2025-12-01 09:52:36.73437166 +0000 UTC m=+1247.376095275" Dec 01 09:52:36 crc kubenswrapper[4933]: I1201 09:52:36.760454 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-587d6fd9d4-mtz4n" Dec 01 09:52:36 crc kubenswrapper[4933]: I1201 09:52:36.793692 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-66cbb488fb-wvfw6"] Dec 01 09:52:36 crc kubenswrapper[4933]: I1201 09:52:36.858793 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-66cbb488fb-wvfw6"] Dec 01 09:52:37 crc kubenswrapper[4933]: I1201 09:52:37.758880 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d" path="/var/lib/kubelet/pods/35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d/volumes" Dec 01 09:52:37 crc kubenswrapper[4933]: I1201 09:52:37.760972 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecffe4c0-9de1-418c-b57e-d0484fb89482" path="/var/lib/kubelet/pods/ecffe4c0-9de1-418c-b57e-d0484fb89482/volumes" Dec 01 09:52:37 crc kubenswrapper[4933]: I1201 09:52:37.791420 4933 generic.go:334] "Generic (PLEG): container finished" podID="6ffcfb41-8086-4e28-b88a-da47dd38a844" containerID="0874941382f63e0c82cae2a5dc94543aa96b15309c19c42f17c114109f943016" exitCode=0 Dec 01 09:52:37 crc kubenswrapper[4933]: I1201 09:52:37.793609 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6775f97bdb-vs7m8" event={"ID":"6ffcfb41-8086-4e28-b88a-da47dd38a844","Type":"ContainerDied","Data":"0874941382f63e0c82cae2a5dc94543aa96b15309c19c42f17c114109f943016"} Dec 01 09:52:37 crc kubenswrapper[4933]: I1201 09:52:37.793666 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6775f97bdb-vs7m8" event={"ID":"6ffcfb41-8086-4e28-b88a-da47dd38a844","Type":"ContainerStarted","Data":"4939aef99ded42615cef4e9d95fccaaae2f1009f74245945b7589fb343808301"} Dec 01 09:52:38 crc kubenswrapper[4933]: I1201 09:52:38.808396 4933 generic.go:334] "Generic (PLEG): container finished" podID="000656f6-99fd-43a3-8ade-31b200d0c18a" containerID="1160013da08ee75c7c8e797e0ae1954be47f186459a81de5a80b7e4e58d7a6e8" exitCode=0 Dec 01 09:52:38 crc kubenswrapper[4933]: I1201 09:52:38.808634 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75479c6864-2fvz5" event={"ID":"000656f6-99fd-43a3-8ade-31b200d0c18a","Type":"ContainerDied","Data":"1160013da08ee75c7c8e797e0ae1954be47f186459a81de5a80b7e4e58d7a6e8"} Dec 01 09:52:38 crc kubenswrapper[4933]: I1201 09:52:38.808940 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75479c6864-2fvz5" event={"ID":"000656f6-99fd-43a3-8ade-31b200d0c18a","Type":"ContainerStarted","Data":"0a95a99de90b97b31b9b8117e17e5922efffb1a03d6dbc4305948c3ef29db209"} Dec 01 09:52:39 crc kubenswrapper[4933]: I1201 09:52:39.059617 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:52:39 crc kubenswrapper[4933]: I1201 09:52:39.059684 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:52:39 crc kubenswrapper[4933]: I1201 09:52:39.135702 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:52:39 crc kubenswrapper[4933]: I1201 09:52:39.135775 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:52:39 crc kubenswrapper[4933]: I1201 09:52:39.366837 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 01 09:52:39 crc kubenswrapper[4933]: I1201 09:52:39.412404 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 09:52:39 crc kubenswrapper[4933]: I1201 09:52:39.820991 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e653c565-dc91-44a0-956f-ca5e840b47e6" containerName="cinder-scheduler" containerID="cri-o://d2dd4324b83c4bd4c8b595d1dcd2760987e00ef8dd28446061ec16eca290d1b2" gracePeriod=30 Dec 01 09:52:39 crc kubenswrapper[4933]: I1201 
09:52:39.821023 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e653c565-dc91-44a0-956f-ca5e840b47e6" containerName="probe" containerID="cri-o://2e6363d795ea8046980eff7c981cf329b989ddce9819341a87bc1a41413fbdfc" gracePeriod=30 Dec 01 09:52:40 crc kubenswrapper[4933]: I1201 09:52:40.663083 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6fd8c5dc6c-czndt" Dec 01 09:52:40 crc kubenswrapper[4933]: I1201 09:52:40.859930 4933 generic.go:334] "Generic (PLEG): container finished" podID="e653c565-dc91-44a0-956f-ca5e840b47e6" containerID="2e6363d795ea8046980eff7c981cf329b989ddce9819341a87bc1a41413fbdfc" exitCode=0 Dec 01 09:52:40 crc kubenswrapper[4933]: I1201 09:52:40.859984 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e653c565-dc91-44a0-956f-ca5e840b47e6","Type":"ContainerDied","Data":"2e6363d795ea8046980eff7c981cf329b989ddce9819341a87bc1a41413fbdfc"} Dec 01 09:52:41 crc kubenswrapper[4933]: I1201 09:52:41.742453 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:52:41 crc kubenswrapper[4933]: I1201 09:52:41.742939 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:52:41 crc kubenswrapper[4933]: I1201 09:52:41.755223 4933 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" Dec 01 09:52:41 crc kubenswrapper[4933]: I1201 09:52:41.790841 4933 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9a07b0704942b02814f9e2cbae890ab2665cd7af25f4178c9003a8e4c8ac846a"} pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 09:52:41 crc kubenswrapper[4933]: I1201 09:52:41.790959 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" containerID="cri-o://9a07b0704942b02814f9e2cbae890ab2665cd7af25f4178c9003a8e4c8ac846a" gracePeriod=600 Dec 01 09:52:42 crc kubenswrapper[4933]: I1201 09:52:42.060694 4933 generic.go:334] "Generic (PLEG): container finished" podID="e653c565-dc91-44a0-956f-ca5e840b47e6" containerID="d2dd4324b83c4bd4c8b595d1dcd2760987e00ef8dd28446061ec16eca290d1b2" exitCode=0 Dec 01 09:52:42 crc kubenswrapper[4933]: I1201 09:52:42.060759 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e653c565-dc91-44a0-956f-ca5e840b47e6","Type":"ContainerDied","Data":"d2dd4324b83c4bd4c8b595d1dcd2760987e00ef8dd28446061ec16eca290d1b2"} Dec 01 09:52:42 crc kubenswrapper[4933]: I1201 09:52:42.297555 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
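The machine-config-daemon entries above show the complete liveness path: the HTTP probe is refused, the prober records the failure, the sync loop marks the container unhealthy, and the container is killed with its 600s grace period so it can be restarted. A schematic sketch of that loop; the failureThreshold value and all types here are assumptions for illustration, not kubelet code:

```go
// Schematic of the liveness-probe flow seen above: repeated probe failures
// lead to "failed liveness probe, will be restarted" and a graceful kill.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probeOnce(url string) error {
	c := http.Client{Timeout: time.Second}
	resp, err := c.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused", as in the log
	}
	resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	const failureThreshold = 3 // assumed default; the real value comes from the probe spec
	failures := 0
	for i := 0; i < failureThreshold; i++ {
		if err := probeOnce("http://127.0.0.1:8798/health"); err != nil {
			failures++
			fmt.Printf("Probe failed: %v\n", err)
		} else {
			failures = 0
		}
	}
	if failures >= failureThreshold {
		fmt.Println("Container failed liveness probe, will be restarted (gracePeriod=600s)")
	}
}
```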
Dec 01 09:52:42 crc kubenswrapper[4933]: E1201 09:52:42.377962 4933 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31deca5a_8ffe_4967_b02f_98a2043ddb23.slice/crio-9a07b0704942b02814f9e2cbae890ab2665cd7af25f4178c9003a8e4c8ac846a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31deca5a_8ffe_4967_b02f_98a2043ddb23.slice/crio-conmon-9a07b0704942b02814f9e2cbae890ab2665cd7af25f4178c9003a8e4c8ac846a.scope\": RecentStats: unable to find data in memory cache]" Dec 01 09:52:42 crc kubenswrapper[4933]: I1201 09:52:42.448506 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwq68\" (UniqueName: \"kubernetes.io/projected/e653c565-dc91-44a0-956f-ca5e840b47e6-kube-api-access-gwq68\") pod \"e653c565-dc91-44a0-956f-ca5e840b47e6\" (UID: \"e653c565-dc91-44a0-956f-ca5e840b47e6\") " Dec 01 09:52:42 crc kubenswrapper[4933]: I1201 09:52:42.449134 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e653c565-dc91-44a0-956f-ca5e840b47e6-combined-ca-bundle\") pod \"e653c565-dc91-44a0-956f-ca5e840b47e6\" (UID: \"e653c565-dc91-44a0-956f-ca5e840b47e6\") " Dec 01 09:52:42 crc kubenswrapper[4933]: I1201 09:52:42.449180 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e653c565-dc91-44a0-956f-ca5e840b47e6-scripts\") pod \"e653c565-dc91-44a0-956f-ca5e840b47e6\" (UID: \"e653c565-dc91-44a0-956f-ca5e840b47e6\") " Dec 01 09:52:42 crc kubenswrapper[4933]: I1201 09:52:42.449369 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e653c565-dc91-44a0-956f-ca5e840b47e6-config-data\") pod \"e653c565-dc91-44a0-956f-ca5e840b47e6\" (UID: \"e653c565-dc91-44a0-956f-ca5e840b47e6\") " Dec 01 09:52:42 crc kubenswrapper[4933]: I1201 09:52:42.449403 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e653c565-dc91-44a0-956f-ca5e840b47e6-etc-machine-id\") pod \"e653c565-dc91-44a0-956f-ca5e840b47e6\" (UID: \"e653c565-dc91-44a0-956f-ca5e840b47e6\") " Dec 01 09:52:42 crc kubenswrapper[4933]: I1201 09:52:42.449446 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e653c565-dc91-44a0-956f-ca5e840b47e6-config-data-custom\") pod \"e653c565-dc91-44a0-956f-ca5e840b47e6\" (UID: \"e653c565-dc91-44a0-956f-ca5e840b47e6\") " Dec 01 09:52:42 crc kubenswrapper[4933]: I1201 09:52:42.451940 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e653c565-dc91-44a0-956f-ca5e840b47e6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e653c565-dc91-44a0-956f-ca5e840b47e6" (UID: "e653c565-dc91-44a0-956f-ca5e840b47e6"). InnerVolumeSpecName "etc-machine-id".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:52:42 crc kubenswrapper[4933]: I1201 09:52:42.473563 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e653c565-dc91-44a0-956f-ca5e840b47e6-kube-api-access-gwq68" (OuterVolumeSpecName: "kube-api-access-gwq68") pod "e653c565-dc91-44a0-956f-ca5e840b47e6" (UID: "e653c565-dc91-44a0-956f-ca5e840b47e6"). InnerVolumeSpecName "kube-api-access-gwq68". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:52:42 crc kubenswrapper[4933]: I1201 09:52:42.480252 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e653c565-dc91-44a0-956f-ca5e840b47e6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e653c565-dc91-44a0-956f-ca5e840b47e6" (UID: "e653c565-dc91-44a0-956f-ca5e840b47e6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:52:42 crc kubenswrapper[4933]: I1201 09:52:42.482430 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e653c565-dc91-44a0-956f-ca5e840b47e6-scripts" (OuterVolumeSpecName: "scripts") pod "e653c565-dc91-44a0-956f-ca5e840b47e6" (UID: "e653c565-dc91-44a0-956f-ca5e840b47e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:52:42 crc kubenswrapper[4933]: I1201 09:52:42.552656 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e653c565-dc91-44a0-956f-ca5e840b47e6-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:42 crc kubenswrapper[4933]: I1201 09:52:42.552681 4933 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e653c565-dc91-44a0-956f-ca5e840b47e6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:42 crc kubenswrapper[4933]: I1201 09:52:42.552692 4933 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e653c565-dc91-44a0-956f-ca5e840b47e6-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:42 crc kubenswrapper[4933]: I1201 09:52:42.552701 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwq68\" (UniqueName: \"kubernetes.io/projected/e653c565-dc91-44a0-956f-ca5e840b47e6-kube-api-access-gwq68\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:42 crc kubenswrapper[4933]: I1201 09:52:42.568970 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e653c565-dc91-44a0-956f-ca5e840b47e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e653c565-dc91-44a0-956f-ca5e840b47e6" (UID: "e653c565-dc91-44a0-956f-ca5e840b47e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:52:42 crc kubenswrapper[4933]: I1201 09:52:42.654348 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e653c565-dc91-44a0-956f-ca5e840b47e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:42 crc kubenswrapper[4933]: I1201 09:52:42.668987 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e653c565-dc91-44a0-956f-ca5e840b47e6-config-data" (OuterVolumeSpecName: "config-data") pod "e653c565-dc91-44a0-956f-ca5e840b47e6" (UID: "e653c565-dc91-44a0-956f-ca5e840b47e6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:52:42 crc kubenswrapper[4933]: I1201 09:52:42.690949 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:42 crc kubenswrapper[4933]: I1201 09:52:42.756065 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e653c565-dc91-44a0-956f-ca5e840b47e6-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.077983 4933 generic.go:334] "Generic (PLEG): container finished" podID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerID="9a07b0704942b02814f9e2cbae890ab2665cd7af25f4178c9003a8e4c8ac846a" exitCode=0 Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.078050 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerDied","Data":"9a07b0704942b02814f9e2cbae890ab2665cd7af25f4178c9003a8e4c8ac846a"} Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.079746 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerStarted","Data":"8505b08ab0c6a32f9d9b3cdadd9a40ce10f6aaa716925824a170840b097c0cb7"} Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.079852 4933 scope.go:117] "RemoveContainer" containerID="9380cff48ee91161c6b7a930159a88a7b204cb44f727f0c73879abbb5f388b3e" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.089837 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e653c565-dc91-44a0-956f-ca5e840b47e6","Type":"ContainerDied","Data":"27a5c9b23330dd43ea3dda8547b01d3b42203080cbc5490db1e3d41f853b86e9"} Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.090002 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.120618 4933 scope.go:117] "RemoveContainer" containerID="2e6363d795ea8046980eff7c981cf329b989ddce9819341a87bc1a41413fbdfc" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.172148 4933 scope.go:117] "RemoveContainer" containerID="d2dd4324b83c4bd4c8b595d1dcd2760987e00ef8dd28446061ec16eca290d1b2" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.174105 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-76fbd6d8c8-kdqkn" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.182112 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.213444 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.229948 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 09:52:43 crc kubenswrapper[4933]: E1201 09:52:43.230736 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecffe4c0-9de1-418c-b57e-d0484fb89482" containerName="neutron-httpd" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.230771 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecffe4c0-9de1-418c-b57e-d0484fb89482" containerName="neutron-httpd" Dec 01 09:52:43 crc kubenswrapper[4933]: E1201 09:52:43.230815 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecffe4c0-9de1-418c-b57e-d0484fb89482" containerName="neutron-api" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.230825 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecffe4c0-9de1-418c-b57e-d0484fb89482" containerName="neutron-api" Dec 01 09:52:43 crc kubenswrapper[4933]: E1201 09:52:43.230859 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e653c565-dc91-44a0-956f-ca5e840b47e6" containerName="probe" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.230873 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="e653c565-dc91-44a0-956f-ca5e840b47e6" containerName="probe" Dec 01 09:52:43 crc kubenswrapper[4933]: E1201 09:52:43.230904 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e653c565-dc91-44a0-956f-ca5e840b47e6" containerName="cinder-scheduler" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.230913 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="e653c565-dc91-44a0-956f-ca5e840b47e6" containerName="cinder-scheduler" Dec 01 09:52:43 crc kubenswrapper[4933]: E1201 09:52:43.230934 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d" containerName="init" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.230946 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d" containerName="init" Dec 01 09:52:43 crc kubenswrapper[4933]: E1201 09:52:43.230997 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d" containerName="dnsmasq-dns" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.231012 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d" containerName="dnsmasq-dns" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.231354 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f9fbcc-e838-4bd0-a05c-a54e13e0ff5d" 
containerName="dnsmasq-dns" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.231390 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecffe4c0-9de1-418c-b57e-d0484fb89482" containerName="neutron-httpd" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.231419 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="e653c565-dc91-44a0-956f-ca5e840b47e6" containerName="cinder-scheduler" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.231443 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecffe4c0-9de1-418c-b57e-d0484fb89482" containerName="neutron-api" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.231459 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="e653c565-dc91-44a0-956f-ca5e840b47e6" containerName="probe" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.244840 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.252458 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.253981 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.302850 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-587d6fd9d4-mtz4n"] Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.303103 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-587d6fd9d4-mtz4n" podUID="db6a88f7-d4dd-4a87-88f5-499bb9f5d377" containerName="barbican-api-log" containerID="cri-o://f0cf57c1e8184f0bc31dc2e70a2650d0fc6c2e34cf642fe12952dd0d92b803bc" gracePeriod=30 Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.303572 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-587d6fd9d4-mtz4n" podUID="db6a88f7-d4dd-4a87-88f5-499bb9f5d377" containerName="barbican-api" containerID="cri-o://1a2890845011c5bb2a175efb25a11484ad26affcf86f62d84312cfbefb673da9" gracePeriod=30 Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.378321 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da9e98c6-6da7-4082-8e5d-f8e571486e96-scripts\") pod \"cinder-scheduler-0\" (UID: \"da9e98c6-6da7-4082-8e5d-f8e571486e96\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.379439 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/da9e98c6-6da7-4082-8e5d-f8e571486e96-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"da9e98c6-6da7-4082-8e5d-f8e571486e96\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.379543 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da9e98c6-6da7-4082-8e5d-f8e571486e96-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"da9e98c6-6da7-4082-8e5d-f8e571486e96\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.379676 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/da9e98c6-6da7-4082-8e5d-f8e571486e96-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"da9e98c6-6da7-4082-8e5d-f8e571486e96\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.380163 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvnc8\" (UniqueName: \"kubernetes.io/projected/da9e98c6-6da7-4082-8e5d-f8e571486e96-kube-api-access-lvnc8\") pod \"cinder-scheduler-0\" (UID: \"da9e98c6-6da7-4082-8e5d-f8e571486e96\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.380270 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da9e98c6-6da7-4082-8e5d-f8e571486e96-config-data\") pod \"cinder-scheduler-0\" (UID: \"da9e98c6-6da7-4082-8e5d-f8e571486e96\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.484757 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvnc8\" (UniqueName: \"kubernetes.io/projected/da9e98c6-6da7-4082-8e5d-f8e571486e96-kube-api-access-lvnc8\") pod \"cinder-scheduler-0\" (UID: \"da9e98c6-6da7-4082-8e5d-f8e571486e96\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.484847 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da9e98c6-6da7-4082-8e5d-f8e571486e96-config-data\") pod \"cinder-scheduler-0\" (UID: \"da9e98c6-6da7-4082-8e5d-f8e571486e96\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.484893 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da9e98c6-6da7-4082-8e5d-f8e571486e96-scripts\") pod \"cinder-scheduler-0\" (UID: \"da9e98c6-6da7-4082-8e5d-f8e571486e96\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.484920 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/da9e98c6-6da7-4082-8e5d-f8e571486e96-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"da9e98c6-6da7-4082-8e5d-f8e571486e96\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.484941 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da9e98c6-6da7-4082-8e5d-f8e571486e96-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"da9e98c6-6da7-4082-8e5d-f8e571486e96\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.484993 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da9e98c6-6da7-4082-8e5d-f8e571486e96-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"da9e98c6-6da7-4082-8e5d-f8e571486e96\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.486882 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/da9e98c6-6da7-4082-8e5d-f8e571486e96-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"da9e98c6-6da7-4082-8e5d-f8e571486e96\") " 
pod="openstack/cinder-scheduler-0" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.499266 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da9e98c6-6da7-4082-8e5d-f8e571486e96-scripts\") pod \"cinder-scheduler-0\" (UID: \"da9e98c6-6da7-4082-8e5d-f8e571486e96\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.499390 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da9e98c6-6da7-4082-8e5d-f8e571486e96-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"da9e98c6-6da7-4082-8e5d-f8e571486e96\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.512097 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da9e98c6-6da7-4082-8e5d-f8e571486e96-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"da9e98c6-6da7-4082-8e5d-f8e571486e96\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.513541 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da9e98c6-6da7-4082-8e5d-f8e571486e96-config-data\") pod \"cinder-scheduler-0\" (UID: \"da9e98c6-6da7-4082-8e5d-f8e571486e96\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.534151 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvnc8\" (UniqueName: \"kubernetes.io/projected/da9e98c6-6da7-4082-8e5d-f8e571486e96-kube-api-access-lvnc8\") pod \"cinder-scheduler-0\" (UID: \"da9e98c6-6da7-4082-8e5d-f8e571486e96\") " pod="openstack/cinder-scheduler-0" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.591286 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 01 09:52:43 crc kubenswrapper[4933]: I1201 09:52:43.774740 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e653c565-dc91-44a0-956f-ca5e840b47e6" path="/var/lib/kubelet/pods/e653c565-dc91-44a0-956f-ca5e840b47e6/volumes" Dec 01 09:52:44 crc kubenswrapper[4933]: I1201 09:52:44.134294 4933 generic.go:334] "Generic (PLEG): container finished" podID="db6a88f7-d4dd-4a87-88f5-499bb9f5d377" containerID="f0cf57c1e8184f0bc31dc2e70a2650d0fc6c2e34cf642fe12952dd0d92b803bc" exitCode=143 Dec 01 09:52:44 crc kubenswrapper[4933]: I1201 09:52:44.134718 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-587d6fd9d4-mtz4n" event={"ID":"db6a88f7-d4dd-4a87-88f5-499bb9f5d377","Type":"ContainerDied","Data":"f0cf57c1e8184f0bc31dc2e70a2650d0fc6c2e34cf642fe12952dd0d92b803bc"} Dec 01 09:52:44 crc kubenswrapper[4933]: I1201 09:52:44.398416 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 09:52:45 crc kubenswrapper[4933]: I1201 09:52:45.261543 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"da9e98c6-6da7-4082-8e5d-f8e571486e96","Type":"ContainerStarted","Data":"6740baee6f6ba164e337f542b699aad25c63da624b7add218dc8805e948c3e91"} Dec 01 09:52:45 crc kubenswrapper[4933]: I1201 09:52:45.650402 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 01 09:52:45 crc kubenswrapper[4933]: I1201 09:52:45.652195 4933 util.go:30] "No sandbox for pod can be found. 
Dec 01 09:52:45 crc kubenswrapper[4933]: I1201 09:52:45.658835 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-xgwjm" Dec 01 09:52:45 crc kubenswrapper[4933]: I1201 09:52:45.659116 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 01 09:52:45 crc kubenswrapper[4933]: I1201 09:52:45.659259 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 01 09:52:45 crc kubenswrapper[4933]: I1201 09:52:45.712432 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 01 09:52:45 crc kubenswrapper[4933]: I1201 09:52:45.764956 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7ad829f6-4a62-4ed2-a99c-30aed564d585-openstack-config-secret\") pod \"openstackclient\" (UID: \"7ad829f6-4a62-4ed2-a99c-30aed564d585\") " pod="openstack/openstackclient" Dec 01 09:52:45 crc kubenswrapper[4933]: I1201 09:52:45.765022 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ad829f6-4a62-4ed2-a99c-30aed564d585-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7ad829f6-4a62-4ed2-a99c-30aed564d585\") " pod="openstack/openstackclient" Dec 01 09:52:45 crc kubenswrapper[4933]: I1201 09:52:45.765113 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7ad829f6-4a62-4ed2-a99c-30aed564d585-openstack-config\") pod \"openstackclient\" (UID: \"7ad829f6-4a62-4ed2-a99c-30aed564d585\") " pod="openstack/openstackclient" Dec 01 09:52:45 crc kubenswrapper[4933]: I1201 09:52:45.765165 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmlkd\" (UniqueName: \"kubernetes.io/projected/7ad829f6-4a62-4ed2-a99c-30aed564d585-kube-api-access-dmlkd\") pod \"openstackclient\" (UID: \"7ad829f6-4a62-4ed2-a99c-30aed564d585\") " pod="openstack/openstackclient" Dec 01 09:52:45 crc kubenswrapper[4933]: I1201 09:52:45.872567 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ad829f6-4a62-4ed2-a99c-30aed564d585-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7ad829f6-4a62-4ed2-a99c-30aed564d585\") " pod="openstack/openstackclient" Dec 01 09:52:45 crc kubenswrapper[4933]: I1201 09:52:45.872717 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7ad829f6-4a62-4ed2-a99c-30aed564d585-openstack-config\") pod \"openstackclient\" (UID: \"7ad829f6-4a62-4ed2-a99c-30aed564d585\") " pod="openstack/openstackclient" Dec 01 09:52:45 crc kubenswrapper[4933]: I1201 09:52:45.872772 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmlkd\" (UniqueName: \"kubernetes.io/projected/7ad829f6-4a62-4ed2-a99c-30aed564d585-kube-api-access-dmlkd\") pod \"openstackclient\" (UID: \"7ad829f6-4a62-4ed2-a99c-30aed564d585\") " pod="openstack/openstackclient" Dec 01 09:52:45 crc kubenswrapper[4933]: I1201 09:52:45.872859 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7ad829f6-4a62-4ed2-a99c-30aed564d585-openstack-config-secret\") pod \"openstackclient\" (UID: \"7ad829f6-4a62-4ed2-a99c-30aed564d585\") " pod="openstack/openstackclient" Dec 01 09:52:45 crc kubenswrapper[4933]: I1201 09:52:45.874108 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7ad829f6-4a62-4ed2-a99c-30aed564d585-openstack-config\") pod \"openstackclient\" (UID: \"7ad829f6-4a62-4ed2-a99c-30aed564d585\") " pod="openstack/openstackclient" Dec 01 09:52:45 crc kubenswrapper[4933]: I1201 09:52:45.881884 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7ad829f6-4a62-4ed2-a99c-30aed564d585-openstack-config-secret\") pod \"openstackclient\" (UID: \"7ad829f6-4a62-4ed2-a99c-30aed564d585\") " pod="openstack/openstackclient" Dec 01 09:52:45 crc kubenswrapper[4933]: I1201 09:52:45.889064 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ad829f6-4a62-4ed2-a99c-30aed564d585-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7ad829f6-4a62-4ed2-a99c-30aed564d585\") " pod="openstack/openstackclient" Dec 01 09:52:45 crc kubenswrapper[4933]: I1201 09:52:45.892924 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmlkd\" (UniqueName: \"kubernetes.io/projected/7ad829f6-4a62-4ed2-a99c-30aed564d585-kube-api-access-dmlkd\") pod \"openstackclient\" (UID: \"7ad829f6-4a62-4ed2-a99c-30aed564d585\") " pod="openstack/openstackclient" Dec 01 09:52:46 crc kubenswrapper[4933]: I1201 09:52:46.085811 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 01 09:52:46 crc kubenswrapper[4933]: I1201 09:52:46.298688 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"da9e98c6-6da7-4082-8e5d-f8e571486e96","Type":"ContainerStarted","Data":"9bda331129e23ea9cc9084b5cc4c9c139701b1296adeebed80fe5b475753a54c"} Dec 01 09:52:46 crc kubenswrapper[4933]: I1201 09:52:46.731332 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.114857 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.278034 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-587d6fd9d4-mtz4n" Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.320971 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"da9e98c6-6da7-4082-8e5d-f8e571486e96","Type":"ContainerStarted","Data":"eeb70e603f103d26ff9d1d9f15bdcfcded5defb585889030e6a3347bf89a5d69"} Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.325829 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-config-data\") pod \"db6a88f7-d4dd-4a87-88f5-499bb9f5d377\" (UID: \"db6a88f7-d4dd-4a87-88f5-499bb9f5d377\") " Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.325906 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-logs\") pod \"db6a88f7-d4dd-4a87-88f5-499bb9f5d377\" (UID: \"db6a88f7-d4dd-4a87-88f5-499bb9f5d377\") " Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.325954 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckhql\" (UniqueName: \"kubernetes.io/projected/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-kube-api-access-ckhql\") pod \"db6a88f7-d4dd-4a87-88f5-499bb9f5d377\" (UID: \"db6a88f7-d4dd-4a87-88f5-499bb9f5d377\") " Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.326123 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-config-data-custom\") pod \"db6a88f7-d4dd-4a87-88f5-499bb9f5d377\" (UID: \"db6a88f7-d4dd-4a87-88f5-499bb9f5d377\") " Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.326153 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-combined-ca-bundle\") pod \"db6a88f7-d4dd-4a87-88f5-499bb9f5d377\" (UID: \"db6a88f7-d4dd-4a87-88f5-499bb9f5d377\") " Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.327058 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-logs" (OuterVolumeSpecName: "logs") pod "db6a88f7-d4dd-4a87-88f5-499bb9f5d377" (UID: "db6a88f7-d4dd-4a87-88f5-499bb9f5d377"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.337273 4933 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-logs\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.339555 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-kube-api-access-ckhql" (OuterVolumeSpecName: "kube-api-access-ckhql") pod "db6a88f7-d4dd-4a87-88f5-499bb9f5d377" (UID: "db6a88f7-d4dd-4a87-88f5-499bb9f5d377"). InnerVolumeSpecName "kube-api-access-ckhql". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.360926 4933 generic.go:334] "Generic (PLEG): container finished" podID="db6a88f7-d4dd-4a87-88f5-499bb9f5d377" containerID="1a2890845011c5bb2a175efb25a11484ad26affcf86f62d84312cfbefb673da9" exitCode=0 Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.360999 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-587d6fd9d4-mtz4n" event={"ID":"db6a88f7-d4dd-4a87-88f5-499bb9f5d377","Type":"ContainerDied","Data":"1a2890845011c5bb2a175efb25a11484ad26affcf86f62d84312cfbefb673da9"} Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.361029 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-587d6fd9d4-mtz4n" event={"ID":"db6a88f7-d4dd-4a87-88f5-499bb9f5d377","Type":"ContainerDied","Data":"7b99dbda9bb50af65804a7c5e6b4d0d151c5d054154907357c78df61e1ad7434"} Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.361049 4933 scope.go:117] "RemoveContainer" containerID="1a2890845011c5bb2a175efb25a11484ad26affcf86f62d84312cfbefb673da9" Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.361195 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-587d6fd9d4-mtz4n" Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.367401 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "db6a88f7-d4dd-4a87-88f5-499bb9f5d377" (UID: "db6a88f7-d4dd-4a87-88f5-499bb9f5d377"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.367658 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7ad829f6-4a62-4ed2-a99c-30aed564d585","Type":"ContainerStarted","Data":"7631f6916e161d1a8aa7564c5a13c035fca47ee5810ff856ff0e9476a4487691"} Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.371430 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.371406618 podStartE2EDuration="4.371406618s" podCreationTimestamp="2025-12-01 09:52:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:52:47.360017629 +0000 UTC m=+1258.001741244" watchObservedRunningTime="2025-12-01 09:52:47.371406618 +0000 UTC m=+1258.013130233" Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.397489 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db6a88f7-d4dd-4a87-88f5-499bb9f5d377" (UID: "db6a88f7-d4dd-4a87-88f5-499bb9f5d377"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.417429 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-config-data" (OuterVolumeSpecName: "config-data") pod "db6a88f7-d4dd-4a87-88f5-499bb9f5d377" (UID: "db6a88f7-d4dd-4a87-88f5-499bb9f5d377"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.434674 4933 scope.go:117] "RemoveContainer" containerID="f0cf57c1e8184f0bc31dc2e70a2650d0fc6c2e34cf642fe12952dd0d92b803bc" Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.458663 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.458702 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckhql\" (UniqueName: \"kubernetes.io/projected/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-kube-api-access-ckhql\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.458715 4933 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.458727 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db6a88f7-d4dd-4a87-88f5-499bb9f5d377-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.481882 4933 scope.go:117] "RemoveContainer" containerID="1a2890845011c5bb2a175efb25a11484ad26affcf86f62d84312cfbefb673da9" Dec 01 09:52:47 crc kubenswrapper[4933]: E1201 09:52:47.485472 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a2890845011c5bb2a175efb25a11484ad26affcf86f62d84312cfbefb673da9\": container with ID starting with 1a2890845011c5bb2a175efb25a11484ad26affcf86f62d84312cfbefb673da9 not found: ID does not exist" containerID="1a2890845011c5bb2a175efb25a11484ad26affcf86f62d84312cfbefb673da9" Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.485543 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a2890845011c5bb2a175efb25a11484ad26affcf86f62d84312cfbefb673da9"} err="failed to get container status \"1a2890845011c5bb2a175efb25a11484ad26affcf86f62d84312cfbefb673da9\": rpc error: code = NotFound desc = could not find container \"1a2890845011c5bb2a175efb25a11484ad26affcf86f62d84312cfbefb673da9\": container with ID starting with 1a2890845011c5bb2a175efb25a11484ad26affcf86f62d84312cfbefb673da9 not found: ID does not exist" Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.485577 4933 scope.go:117] "RemoveContainer" containerID="f0cf57c1e8184f0bc31dc2e70a2650d0fc6c2e34cf642fe12952dd0d92b803bc" Dec 01 09:52:47 crc kubenswrapper[4933]: E1201 09:52:47.487871 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0cf57c1e8184f0bc31dc2e70a2650d0fc6c2e34cf642fe12952dd0d92b803bc\": container with ID starting with f0cf57c1e8184f0bc31dc2e70a2650d0fc6c2e34cf642fe12952dd0d92b803bc not found: ID does not exist" containerID="f0cf57c1e8184f0bc31dc2e70a2650d0fc6c2e34cf642fe12952dd0d92b803bc" Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.487940 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0cf57c1e8184f0bc31dc2e70a2650d0fc6c2e34cf642fe12952dd0d92b803bc"} err="failed to get container status \"f0cf57c1e8184f0bc31dc2e70a2650d0fc6c2e34cf642fe12952dd0d92b803bc\": rpc error: code = 
Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.713394 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-587d6fd9d4-mtz4n"] Dec 01 09:52:47 crc kubenswrapper[4933]: I1201 09:52:47.731472 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-587d6fd9d4-mtz4n"] Dec 01 09:52:48 crc kubenswrapper[4933]: I1201 09:52:48.591783 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.062193 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6775f97bdb-vs7m8" podUID="6ffcfb41-8086-4e28-b88a-da47dd38a844" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.146694 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-75479c6864-2fvz5" podUID="000656f6-99fd-43a3-8ade-31b200d0c18a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.394760 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5c69467867-495s4"] Dec 01 09:52:49 crc kubenswrapper[4933]: E1201 09:52:49.395873 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db6a88f7-d4dd-4a87-88f5-499bb9f5d377" containerName="barbican-api" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.395901 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="db6a88f7-d4dd-4a87-88f5-499bb9f5d377" containerName="barbican-api" Dec 01 09:52:49 crc kubenswrapper[4933]: E1201 09:52:49.395915 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db6a88f7-d4dd-4a87-88f5-499bb9f5d377" containerName="barbican-api-log" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.395922 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="db6a88f7-d4dd-4a87-88f5-499bb9f5d377" containerName="barbican-api-log" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.396130 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="db6a88f7-d4dd-4a87-88f5-499bb9f5d377" containerName="barbican-api" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.396153 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="db6a88f7-d4dd-4a87-88f5-499bb9f5d377" containerName="barbican-api-log" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.397162 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5c69467867-495s4"
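Right before admitting swift-proxy, cpu_manager and memory_manager purge the "stale state" still recorded for containers of the already-deleted barbican-api pod. A toy sketch of that bookkeeping, with invented types (not kubelet's state store):

```go
// Stale-state purge: drop resource assignments whose owning pod no longer exists.
package main

import "fmt"

type key struct{ podUID, container string }

func main() {
	cpusets := map[key]string{
		{"db6a88f7-d4dd-4a87-88f5-499bb9f5d377", "barbican-api"}:     "0-3",
		{"db6a88f7-d4dd-4a87-88f5-499bb9f5d377", "barbican-api-log"}: "0-3",
	}
	active := map[string]bool{} // pod UIDs still known to the pod manager
	for k := range cpusets {    // deleting during range is safe in Go
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.container)
			delete(cpusets, k)
		}
	}
}
```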
Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.400828 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.401024 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.401134 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.480337 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5c69467867-495s4"] Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.508998 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ceb0496-c824-4d20-8a63-43bc6aa47f97-combined-ca-bundle\") pod \"swift-proxy-5c69467867-495s4\" (UID: \"4ceb0496-c824-4d20-8a63-43bc6aa47f97\") " pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.509060 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ceb0496-c824-4d20-8a63-43bc6aa47f97-run-httpd\") pod \"swift-proxy-5c69467867-495s4\" (UID: \"4ceb0496-c824-4d20-8a63-43bc6aa47f97\") " pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.509105 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4ceb0496-c824-4d20-8a63-43bc6aa47f97-etc-swift\") pod \"swift-proxy-5c69467867-495s4\" (UID: \"4ceb0496-c824-4d20-8a63-43bc6aa47f97\") " pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.509123 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ceb0496-c824-4d20-8a63-43bc6aa47f97-internal-tls-certs\") pod \"swift-proxy-5c69467867-495s4\" (UID: \"4ceb0496-c824-4d20-8a63-43bc6aa47f97\") " pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.509180 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ceb0496-c824-4d20-8a63-43bc6aa47f97-log-httpd\") pod \"swift-proxy-5c69467867-495s4\" (UID: \"4ceb0496-c824-4d20-8a63-43bc6aa47f97\") " pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.509198 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ceb0496-c824-4d20-8a63-43bc6aa47f97-config-data\") pod \"swift-proxy-5c69467867-495s4\" (UID: \"4ceb0496-c824-4d20-8a63-43bc6aa47f97\") " pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.509272 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ceb0496-c824-4d20-8a63-43bc6aa47f97-public-tls-certs\") pod \"swift-proxy-5c69467867-495s4\" (UID: \"4ceb0496-c824-4d20-8a63-43bc6aa47f97\") "
pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.509435 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p27xn\" (UniqueName: \"kubernetes.io/projected/4ceb0496-c824-4d20-8a63-43bc6aa47f97-kube-api-access-p27xn\") pod \"swift-proxy-5c69467867-495s4\" (UID: \"4ceb0496-c824-4d20-8a63-43bc6aa47f97\") " pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.611722 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ceb0496-c824-4d20-8a63-43bc6aa47f97-combined-ca-bundle\") pod \"swift-proxy-5c69467867-495s4\" (UID: \"4ceb0496-c824-4d20-8a63-43bc6aa47f97\") " pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.611780 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ceb0496-c824-4d20-8a63-43bc6aa47f97-run-httpd\") pod \"swift-proxy-5c69467867-495s4\" (UID: \"4ceb0496-c824-4d20-8a63-43bc6aa47f97\") " pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.611838 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4ceb0496-c824-4d20-8a63-43bc6aa47f97-etc-swift\") pod \"swift-proxy-5c69467867-495s4\" (UID: \"4ceb0496-c824-4d20-8a63-43bc6aa47f97\") " pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.611869 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ceb0496-c824-4d20-8a63-43bc6aa47f97-internal-tls-certs\") pod \"swift-proxy-5c69467867-495s4\" (UID: \"4ceb0496-c824-4d20-8a63-43bc6aa47f97\") " pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.611933 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ceb0496-c824-4d20-8a63-43bc6aa47f97-log-httpd\") pod \"swift-proxy-5c69467867-495s4\" (UID: \"4ceb0496-c824-4d20-8a63-43bc6aa47f97\") " pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.611957 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ceb0496-c824-4d20-8a63-43bc6aa47f97-config-data\") pod \"swift-proxy-5c69467867-495s4\" (UID: \"4ceb0496-c824-4d20-8a63-43bc6aa47f97\") " pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.612021 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ceb0496-c824-4d20-8a63-43bc6aa47f97-public-tls-certs\") pod \"swift-proxy-5c69467867-495s4\" (UID: \"4ceb0496-c824-4d20-8a63-43bc6aa47f97\") " pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.612082 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p27xn\" (UniqueName: \"kubernetes.io/projected/4ceb0496-c824-4d20-8a63-43bc6aa47f97-kube-api-access-p27xn\") pod \"swift-proxy-5c69467867-495s4\" (UID: \"4ceb0496-c824-4d20-8a63-43bc6aa47f97\") " 
pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.615657 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ceb0496-c824-4d20-8a63-43bc6aa47f97-run-httpd\") pod \"swift-proxy-5c69467867-495s4\" (UID: \"4ceb0496-c824-4d20-8a63-43bc6aa47f97\") " pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.616042 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ceb0496-c824-4d20-8a63-43bc6aa47f97-log-httpd\") pod \"swift-proxy-5c69467867-495s4\" (UID: \"4ceb0496-c824-4d20-8a63-43bc6aa47f97\") " pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.621898 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ceb0496-c824-4d20-8a63-43bc6aa47f97-public-tls-certs\") pod \"swift-proxy-5c69467867-495s4\" (UID: \"4ceb0496-c824-4d20-8a63-43bc6aa47f97\") " pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.624271 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ceb0496-c824-4d20-8a63-43bc6aa47f97-combined-ca-bundle\") pod \"swift-proxy-5c69467867-495s4\" (UID: \"4ceb0496-c824-4d20-8a63-43bc6aa47f97\") " pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.629081 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ceb0496-c824-4d20-8a63-43bc6aa47f97-config-data\") pod \"swift-proxy-5c69467867-495s4\" (UID: \"4ceb0496-c824-4d20-8a63-43bc6aa47f97\") " pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.631874 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p27xn\" (UniqueName: \"kubernetes.io/projected/4ceb0496-c824-4d20-8a63-43bc6aa47f97-kube-api-access-p27xn\") pod \"swift-proxy-5c69467867-495s4\" (UID: \"4ceb0496-c824-4d20-8a63-43bc6aa47f97\") " pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.637453 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ceb0496-c824-4d20-8a63-43bc6aa47f97-internal-tls-certs\") pod \"swift-proxy-5c69467867-495s4\" (UID: \"4ceb0496-c824-4d20-8a63-43bc6aa47f97\") " pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.687120 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4ceb0496-c824-4d20-8a63-43bc6aa47f97-etc-swift\") pod \"swift-proxy-5c69467867-495s4\" (UID: \"4ceb0496-c824-4d20-8a63-43bc6aa47f97\") " pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.734653 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:49 crc kubenswrapper[4933]: I1201 09:52:49.740027 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db6a88f7-d4dd-4a87-88f5-499bb9f5d377" path="/var/lib/kubelet/pods/db6a88f7-d4dd-4a87-88f5-499bb9f5d377/volumes" Dec 01 09:52:50 crc kubenswrapper[4933]: I1201 09:52:50.467761 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5c69467867-495s4"] Dec 01 09:52:51 crc kubenswrapper[4933]: I1201 09:52:51.430673 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5c69467867-495s4" event={"ID":"4ceb0496-c824-4d20-8a63-43bc6aa47f97","Type":"ContainerStarted","Data":"e619b0d170ef4de4ec07d5a74e2f8921fb9541f26db3e9648b2192d63548f240"} Dec 01 09:52:51 crc kubenswrapper[4933]: I1201 09:52:51.431595 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5c69467867-495s4" event={"ID":"4ceb0496-c824-4d20-8a63-43bc6aa47f97","Type":"ContainerStarted","Data":"2cee1c35f935d783809e2da98a0bf8c990dd6e964a9831867ac09fe099db45dd"} Dec 01 09:52:51 crc kubenswrapper[4933]: I1201 09:52:51.431616 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5c69467867-495s4" event={"ID":"4ceb0496-c824-4d20-8a63-43bc6aa47f97","Type":"ContainerStarted","Data":"859ccff7f743a9c96c253cfd17341ba2059412cb3a1965553f9459fef034ac6b"} Dec 01 09:52:52 crc kubenswrapper[4933]: I1201 09:52:52.457289 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:52 crc kubenswrapper[4933]: I1201 09:52:52.457377 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:52 crc kubenswrapper[4933]: I1201 09:52:52.469763 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:52:52 crc kubenswrapper[4933]: I1201 09:52:52.470819 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="694ae277-c7eb-4d3c-8125-b5fb7fa45b71" containerName="ceilometer-central-agent" containerID="cri-o://68a0edf398f24c91dfb2d4c7654df6ef970fb66c7eab8b6a360a095b264f4dc1" gracePeriod=30 Dec 01 09:52:52 crc kubenswrapper[4933]: I1201 09:52:52.471033 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="694ae277-c7eb-4d3c-8125-b5fb7fa45b71" containerName="proxy-httpd" containerID="cri-o://548e12572203ec08f31aeb72e7bf6312905452a77e95e1cb11d34c7b5a8c9661" gracePeriod=30 Dec 01 09:52:52 crc kubenswrapper[4933]: I1201 09:52:52.471979 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="694ae277-c7eb-4d3c-8125-b5fb7fa45b71" containerName="ceilometer-notification-agent" containerID="cri-o://e755e38f445fdefd32cf04e17672fa97d04c6ce71929ad05233e802dcb8b2def" gracePeriod=30 Dec 01 09:52:52 crc kubenswrapper[4933]: I1201 09:52:52.471120 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="694ae277-c7eb-4d3c-8125-b5fb7fa45b71" containerName="sg-core" containerID="cri-o://b0292fc49a23ce180dfbd438fac50a747201f9f1dc65e2c6deb7e0e71d0d6ce5" gracePeriod=30 Dec 01 09:52:52 crc kubenswrapper[4933]: I1201 09:52:52.508990 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5c69467867-495s4" podStartSLOduration=3.508959871 
podStartE2EDuration="3.508959871s" podCreationTimestamp="2025-12-01 09:52:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:52:52.481998801 +0000 UTC m=+1263.123722416" watchObservedRunningTime="2025-12-01 09:52:52.508959871 +0000 UTC m=+1263.150683536" Dec 01 09:52:52 crc kubenswrapper[4933]: I1201 09:52:52.594460 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="694ae277-c7eb-4d3c-8125-b5fb7fa45b71" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.160:3000/\": read tcp 10.217.0.2:48396->10.217.0.160:3000: read: connection reset by peer" Dec 01 09:52:52 crc kubenswrapper[4933]: E1201 09:52:52.755173 4933 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod694ae277_c7eb_4d3c_8125_b5fb7fa45b71.slice/crio-b0292fc49a23ce180dfbd438fac50a747201f9f1dc65e2c6deb7e0e71d0d6ce5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod694ae277_c7eb_4d3c_8125_b5fb7fa45b71.slice/crio-548e12572203ec08f31aeb72e7bf6312905452a77e95e1cb11d34c7b5a8c9661.scope\": RecentStats: unable to find data in memory cache]" Dec 01 09:52:53 crc kubenswrapper[4933]: I1201 09:52:53.480962 4933 generic.go:334] "Generic (PLEG): container finished" podID="694ae277-c7eb-4d3c-8125-b5fb7fa45b71" containerID="548e12572203ec08f31aeb72e7bf6312905452a77e95e1cb11d34c7b5a8c9661" exitCode=0 Dec 01 09:52:53 crc kubenswrapper[4933]: I1201 09:52:53.481014 4933 generic.go:334] "Generic (PLEG): container finished" podID="694ae277-c7eb-4d3c-8125-b5fb7fa45b71" containerID="b0292fc49a23ce180dfbd438fac50a747201f9f1dc65e2c6deb7e0e71d0d6ce5" exitCode=2 Dec 01 09:52:53 crc kubenswrapper[4933]: I1201 09:52:53.481028 4933 generic.go:334] "Generic (PLEG): container finished" podID="694ae277-c7eb-4d3c-8125-b5fb7fa45b71" containerID="68a0edf398f24c91dfb2d4c7654df6ef970fb66c7eab8b6a360a095b264f4dc1" exitCode=0 Dec 01 09:52:53 crc kubenswrapper[4933]: I1201 09:52:53.481003 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"694ae277-c7eb-4d3c-8125-b5fb7fa45b71","Type":"ContainerDied","Data":"548e12572203ec08f31aeb72e7bf6312905452a77e95e1cb11d34c7b5a8c9661"} Dec 01 09:52:53 crc kubenswrapper[4933]: I1201 09:52:53.481104 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"694ae277-c7eb-4d3c-8125-b5fb7fa45b71","Type":"ContainerDied","Data":"b0292fc49a23ce180dfbd438fac50a747201f9f1dc65e2c6deb7e0e71d0d6ce5"} Dec 01 09:52:53 crc kubenswrapper[4933]: I1201 09:52:53.481118 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"694ae277-c7eb-4d3c-8125-b5fb7fa45b71","Type":"ContainerDied","Data":"68a0edf398f24c91dfb2d4c7654df6ef970fb66c7eab8b6a360a095b264f4dc1"} Dec 01 09:52:53 crc kubenswrapper[4933]: I1201 09:52:53.590879 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="694ae277-c7eb-4d3c-8125-b5fb7fa45b71" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.160:3000/\": dial tcp 10.217.0.160:3000: connect: connection refused" Dec 01 09:52:54 crc kubenswrapper[4933]: I1201 09:52:54.118762 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" 
Dec 01 09:52:56 crc kubenswrapper[4933]: I1201 09:52:56.519360 4933 generic.go:334] "Generic (PLEG): container finished" podID="694ae277-c7eb-4d3c-8125-b5fb7fa45b71" containerID="e755e38f445fdefd32cf04e17672fa97d04c6ce71929ad05233e802dcb8b2def" exitCode=0 Dec 01 09:52:56 crc kubenswrapper[4933]: I1201 09:52:56.519450 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"694ae277-c7eb-4d3c-8125-b5fb7fa45b71","Type":"ContainerDied","Data":"e755e38f445fdefd32cf04e17672fa97d04c6ce71929ad05233e802dcb8b2def"} Dec 01 09:52:59 crc kubenswrapper[4933]: I1201 09:52:59.061385 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6775f97bdb-vs7m8" podUID="6ffcfb41-8086-4e28-b88a-da47dd38a844" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Dec 01 09:52:59 crc kubenswrapper[4933]: I1201 09:52:59.136400 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-75479c6864-2fvz5" podUID="000656f6-99fd-43a3-8ade-31b200d0c18a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Dec 01 09:52:59 crc kubenswrapper[4933]: I1201 09:52:59.755171 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:52:59 crc kubenswrapper[4933]: I1201 09:52:59.759170 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5c69467867-495s4" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.026323 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.183973 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-run-httpd\") pod \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.184104 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-combined-ca-bundle\") pod \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.184523 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "694ae277-c7eb-4d3c-8125-b5fb7fa45b71" (UID: "694ae277-c7eb-4d3c-8125-b5fb7fa45b71"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.185193 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw297\" (UniqueName: \"kubernetes.io/projected/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-kube-api-access-fw297\") pod \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.185416 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-log-httpd\") pod \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.185622 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-scripts\") pod \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.185831 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-sg-core-conf-yaml\") pod \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.185960 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-config-data\") pod \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\" (UID: \"694ae277-c7eb-4d3c-8125-b5fb7fa45b71\") " Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.185984 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "694ae277-c7eb-4d3c-8125-b5fb7fa45b71" (UID: "694ae277-c7eb-4d3c-8125-b5fb7fa45b71"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.187147 4933 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.187270 4933 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.193701 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-scripts" (OuterVolumeSpecName: "scripts") pod "694ae277-c7eb-4d3c-8125-b5fb7fa45b71" (UID: "694ae277-c7eb-4d3c-8125-b5fb7fa45b71"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.193969 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-kube-api-access-fw297" (OuterVolumeSpecName: "kube-api-access-fw297") pod "694ae277-c7eb-4d3c-8125-b5fb7fa45b71" (UID: "694ae277-c7eb-4d3c-8125-b5fb7fa45b71"). 
InnerVolumeSpecName "kube-api-access-fw297". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.221859 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "694ae277-c7eb-4d3c-8125-b5fb7fa45b71" (UID: "694ae277-c7eb-4d3c-8125-b5fb7fa45b71"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.280853 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "694ae277-c7eb-4d3c-8125-b5fb7fa45b71" (UID: "694ae277-c7eb-4d3c-8125-b5fb7fa45b71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.289962 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.290038 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw297\" (UniqueName: \"kubernetes.io/projected/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-kube-api-access-fw297\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.290053 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.290063 4933 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.307805 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-config-data" (OuterVolumeSpecName: "config-data") pod "694ae277-c7eb-4d3c-8125-b5fb7fa45b71" (UID: "694ae277-c7eb-4d3c-8125-b5fb7fa45b71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.393068 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/694ae277-c7eb-4d3c-8125-b5fb7fa45b71-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.605906 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7ad829f6-4a62-4ed2-a99c-30aed564d585","Type":"ContainerStarted","Data":"909034f10497dff4874bc478843a4e1c9d76df7b971820499c09dd1725cb9ee6"} Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.611030 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.610877 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"694ae277-c7eb-4d3c-8125-b5fb7fa45b71","Type":"ContainerDied","Data":"216c0689e2064419bf5f0d5e413dd3b7a55d1be63630efa549fc71cc11c2557d"} Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.612223 4933 scope.go:117] "RemoveContainer" containerID="548e12572203ec08f31aeb72e7bf6312905452a77e95e1cb11d34c7b5a8c9661" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.690447 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.758263851 podStartE2EDuration="18.69041282s" podCreationTimestamp="2025-12-01 09:52:45 +0000 UTC" firstStartedPulling="2025-12-01 09:52:46.755481905 +0000 UTC m=+1257.397205520" lastFinishedPulling="2025-12-01 09:53:02.687630874 +0000 UTC m=+1273.329354489" observedRunningTime="2025-12-01 09:53:03.635923016 +0000 UTC m=+1274.277646641" watchObservedRunningTime="2025-12-01 09:53:03.69041282 +0000 UTC m=+1274.332136435" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.700222 4933 scope.go:117] "RemoveContainer" containerID="b0292fc49a23ce180dfbd438fac50a747201f9f1dc65e2c6deb7e0e71d0d6ce5" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.704161 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.715349 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.730195 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:53:03 crc kubenswrapper[4933]: E1201 09:53:03.730761 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="694ae277-c7eb-4d3c-8125-b5fb7fa45b71" containerName="proxy-httpd" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.730782 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="694ae277-c7eb-4d3c-8125-b5fb7fa45b71" containerName="proxy-httpd" Dec 01 09:53:03 crc kubenswrapper[4933]: E1201 09:53:03.730799 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="694ae277-c7eb-4d3c-8125-b5fb7fa45b71" containerName="ceilometer-notification-agent" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.730808 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="694ae277-c7eb-4d3c-8125-b5fb7fa45b71" containerName="ceilometer-notification-agent" Dec 01 09:53:03 crc kubenswrapper[4933]: E1201 09:53:03.730832 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="694ae277-c7eb-4d3c-8125-b5fb7fa45b71" containerName="ceilometer-central-agent" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.730839 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="694ae277-c7eb-4d3c-8125-b5fb7fa45b71" containerName="ceilometer-central-agent" Dec 01 09:53:03 crc kubenswrapper[4933]: E1201 09:53:03.730847 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="694ae277-c7eb-4d3c-8125-b5fb7fa45b71" containerName="sg-core" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.730853 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="694ae277-c7eb-4d3c-8125-b5fb7fa45b71" containerName="sg-core" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.731049 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="694ae277-c7eb-4d3c-8125-b5fb7fa45b71" 
containerName="sg-core" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.731081 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="694ae277-c7eb-4d3c-8125-b5fb7fa45b71" containerName="proxy-httpd" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.731101 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="694ae277-c7eb-4d3c-8125-b5fb7fa45b71" containerName="ceilometer-notification-agent" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.731118 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="694ae277-c7eb-4d3c-8125-b5fb7fa45b71" containerName="ceilometer-central-agent" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.734556 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.737810 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.738720 4933 scope.go:117] "RemoveContainer" containerID="e755e38f445fdefd32cf04e17672fa97d04c6ce71929ad05233e802dcb8b2def" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.741500 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.799442 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.828483 4933 scope.go:117] "RemoveContainer" containerID="68a0edf398f24c91dfb2d4c7654df6ef970fb66c7eab8b6a360a095b264f4dc1" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.904798 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc448c5-585f-43f9-bb98-c592a65c6fb9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " pod="openstack/ceilometer-0" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.904880 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfc448c5-585f-43f9-bb98-c592a65c6fb9-scripts\") pod \"ceilometer-0\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " pod="openstack/ceilometer-0" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.904912 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dfc448c5-585f-43f9-bb98-c592a65c6fb9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " pod="openstack/ceilometer-0" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.905294 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfc448c5-585f-43f9-bb98-c592a65c6fb9-log-httpd\") pod \"ceilometer-0\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " pod="openstack/ceilometer-0" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.905383 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6nvk\" (UniqueName: \"kubernetes.io/projected/dfc448c5-585f-43f9-bb98-c592a65c6fb9-kube-api-access-n6nvk\") pod \"ceilometer-0\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " pod="openstack/ceilometer-0" Dec 01 09:53:03 crc 
kubenswrapper[4933]: I1201 09:53:03.905544 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfc448c5-585f-43f9-bb98-c592a65c6fb9-config-data\") pod \"ceilometer-0\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " pod="openstack/ceilometer-0" Dec 01 09:53:03 crc kubenswrapper[4933]: I1201 09:53:03.905627 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfc448c5-585f-43f9-bb98-c592a65c6fb9-run-httpd\") pod \"ceilometer-0\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " pod="openstack/ceilometer-0" Dec 01 09:53:04 crc kubenswrapper[4933]: I1201 09:53:04.008060 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfc448c5-585f-43f9-bb98-c592a65c6fb9-scripts\") pod \"ceilometer-0\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " pod="openstack/ceilometer-0" Dec 01 09:53:04 crc kubenswrapper[4933]: I1201 09:53:04.008136 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dfc448c5-585f-43f9-bb98-c592a65c6fb9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " pod="openstack/ceilometer-0" Dec 01 09:53:04 crc kubenswrapper[4933]: I1201 09:53:04.008202 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfc448c5-585f-43f9-bb98-c592a65c6fb9-log-httpd\") pod \"ceilometer-0\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " pod="openstack/ceilometer-0" Dec 01 09:53:04 crc kubenswrapper[4933]: I1201 09:53:04.008226 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6nvk\" (UniqueName: \"kubernetes.io/projected/dfc448c5-585f-43f9-bb98-c592a65c6fb9-kube-api-access-n6nvk\") pod \"ceilometer-0\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " pod="openstack/ceilometer-0" Dec 01 09:53:04 crc kubenswrapper[4933]: I1201 09:53:04.008280 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfc448c5-585f-43f9-bb98-c592a65c6fb9-config-data\") pod \"ceilometer-0\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " pod="openstack/ceilometer-0" Dec 01 09:53:04 crc kubenswrapper[4933]: I1201 09:53:04.008340 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfc448c5-585f-43f9-bb98-c592a65c6fb9-run-httpd\") pod \"ceilometer-0\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " pod="openstack/ceilometer-0" Dec 01 09:53:04 crc kubenswrapper[4933]: I1201 09:53:04.008524 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc448c5-585f-43f9-bb98-c592a65c6fb9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " pod="openstack/ceilometer-0" Dec 01 09:53:04 crc kubenswrapper[4933]: I1201 09:53:04.009281 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfc448c5-585f-43f9-bb98-c592a65c6fb9-run-httpd\") pod \"ceilometer-0\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " pod="openstack/ceilometer-0" Dec 01 09:53:04 crc 
kubenswrapper[4933]: I1201 09:53:04.009630 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfc448c5-585f-43f9-bb98-c592a65c6fb9-log-httpd\") pod \"ceilometer-0\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " pod="openstack/ceilometer-0" Dec 01 09:53:04 crc kubenswrapper[4933]: I1201 09:53:04.015200 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfc448c5-585f-43f9-bb98-c592a65c6fb9-scripts\") pod \"ceilometer-0\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " pod="openstack/ceilometer-0" Dec 01 09:53:04 crc kubenswrapper[4933]: I1201 09:53:04.016397 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfc448c5-585f-43f9-bb98-c592a65c6fb9-config-data\") pod \"ceilometer-0\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " pod="openstack/ceilometer-0" Dec 01 09:53:04 crc kubenswrapper[4933]: I1201 09:53:04.017011 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc448c5-585f-43f9-bb98-c592a65c6fb9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " pod="openstack/ceilometer-0" Dec 01 09:53:04 crc kubenswrapper[4933]: I1201 09:53:04.025987 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dfc448c5-585f-43f9-bb98-c592a65c6fb9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " pod="openstack/ceilometer-0" Dec 01 09:53:04 crc kubenswrapper[4933]: I1201 09:53:04.030276 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6nvk\" (UniqueName: \"kubernetes.io/projected/dfc448c5-585f-43f9-bb98-c592a65c6fb9-kube-api-access-n6nvk\") pod \"ceilometer-0\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " pod="openstack/ceilometer-0" Dec 01 09:53:04 crc kubenswrapper[4933]: I1201 09:53:04.056723 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:53:04 crc kubenswrapper[4933]: I1201 09:53:04.583744 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:53:04 crc kubenswrapper[4933]: W1201 09:53:04.591938 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfc448c5_585f_43f9_bb98_c592a65c6fb9.slice/crio-5e44ed1e4736735f91472214f31d85536dee97312896d733a1d7c7629151c993 WatchSource:0}: Error finding container 5e44ed1e4736735f91472214f31d85536dee97312896d733a1d7c7629151c993: Status 404 returned error can't find the container with id 5e44ed1e4736735f91472214f31d85536dee97312896d733a1d7c7629151c993 Dec 01 09:53:04 crc kubenswrapper[4933]: I1201 09:53:04.626975 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfc448c5-585f-43f9-bb98-c592a65c6fb9","Type":"ContainerStarted","Data":"5e44ed1e4736735f91472214f31d85536dee97312896d733a1d7c7629151c993"} Dec 01 09:53:05 crc kubenswrapper[4933]: I1201 09:53:05.740293 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="694ae277-c7eb-4d3c-8125-b5fb7fa45b71" path="/var/lib/kubelet/pods/694ae277-c7eb-4d3c-8125-b5fb7fa45b71/volumes" Dec 01 09:53:06 crc kubenswrapper[4933]: I1201 09:53:06.651410 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfc448c5-585f-43f9-bb98-c592a65c6fb9","Type":"ContainerStarted","Data":"f34e8c843fcb29b1f86069f3ea981d4476178bb90f65505b44a78fd496034d51"} Dec 01 09:53:08 crc kubenswrapper[4933]: I1201 09:53:08.778338 4933 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 09:53:09 crc kubenswrapper[4933]: I1201 09:53:09.720671 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfc448c5-585f-43f9-bb98-c592a65c6fb9","Type":"ContainerStarted","Data":"d6fba986fe285c68996afb5d8af1a1d64036d504600fb4117ed5ef32a394f36a"} Dec 01 09:53:09 crc kubenswrapper[4933]: I1201 09:53:09.721536 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfc448c5-585f-43f9-bb98-c592a65c6fb9","Type":"ContainerStarted","Data":"3efef0ca223e351f022ab3ebaabb31644ea39a541cefeec945b3d1d590c9bc7b"} Dec 01 09:53:11 crc kubenswrapper[4933]: I1201 09:53:11.256512 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:53:11 crc kubenswrapper[4933]: I1201 09:53:11.748993 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfc448c5-585f-43f9-bb98-c592a65c6fb9","Type":"ContainerStarted","Data":"1e10ea1cbae57f218b013025b5b8b3ca0a005ebc695fd282461fafd7b60ef6f8"} Dec 01 09:53:11 crc kubenswrapper[4933]: I1201 09:53:11.749381 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 09:53:11 crc kubenswrapper[4933]: I1201 09:53:11.788792 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.898745988 podStartE2EDuration="8.788758208s" podCreationTimestamp="2025-12-01 09:53:03 +0000 UTC" firstStartedPulling="2025-12-01 09:53:04.595545735 +0000 UTC m=+1275.237269350" lastFinishedPulling="2025-12-01 09:53:10.485557945 +0000 UTC m=+1281.127281570" observedRunningTime="2025-12-01 09:53:11.782614927 +0000 UTC m=+1282.424338552" watchObservedRunningTime="2025-12-01 09:53:11.788758208 +0000 UTC 
m=+1282.430481833" Dec 01 09:53:11 crc kubenswrapper[4933]: I1201 09:53:11.926501 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:53:11 crc kubenswrapper[4933]: I1201 09:53:11.950336 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:53:12 crc kubenswrapper[4933]: I1201 09:53:12.759944 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dfc448c5-585f-43f9-bb98-c592a65c6fb9" containerName="ceilometer-central-agent" containerID="cri-o://f34e8c843fcb29b1f86069f3ea981d4476178bb90f65505b44a78fd496034d51" gracePeriod=30 Dec 01 09:53:12 crc kubenswrapper[4933]: I1201 09:53:12.760123 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dfc448c5-585f-43f9-bb98-c592a65c6fb9" containerName="ceilometer-notification-agent" containerID="cri-o://3efef0ca223e351f022ab3ebaabb31644ea39a541cefeec945b3d1d590c9bc7b" gracePeriod=30 Dec 01 09:53:12 crc kubenswrapper[4933]: I1201 09:53:12.760105 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dfc448c5-585f-43f9-bb98-c592a65c6fb9" containerName="sg-core" containerID="cri-o://d6fba986fe285c68996afb5d8af1a1d64036d504600fb4117ed5ef32a394f36a" gracePeriod=30 Dec 01 09:53:12 crc kubenswrapper[4933]: I1201 09:53:12.762578 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dfc448c5-585f-43f9-bb98-c592a65c6fb9" containerName="proxy-httpd" containerID="cri-o://1e10ea1cbae57f218b013025b5b8b3ca0a005ebc695fd282461fafd7b60ef6f8" gracePeriod=30 Dec 01 09:53:13 crc kubenswrapper[4933]: I1201 09:53:13.773268 4933 generic.go:334] "Generic (PLEG): container finished" podID="dfc448c5-585f-43f9-bb98-c592a65c6fb9" containerID="1e10ea1cbae57f218b013025b5b8b3ca0a005ebc695fd282461fafd7b60ef6f8" exitCode=0 Dec 01 09:53:13 crc kubenswrapper[4933]: I1201 09:53:13.774317 4933 generic.go:334] "Generic (PLEG): container finished" podID="dfc448c5-585f-43f9-bb98-c592a65c6fb9" containerID="d6fba986fe285c68996afb5d8af1a1d64036d504600fb4117ed5ef32a394f36a" exitCode=2 Dec 01 09:53:13 crc kubenswrapper[4933]: I1201 09:53:13.774335 4933 generic.go:334] "Generic (PLEG): container finished" podID="dfc448c5-585f-43f9-bb98-c592a65c6fb9" containerID="3efef0ca223e351f022ab3ebaabb31644ea39a541cefeec945b3d1d590c9bc7b" exitCode=0 Dec 01 09:53:13 crc kubenswrapper[4933]: I1201 09:53:13.773548 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfc448c5-585f-43f9-bb98-c592a65c6fb9","Type":"ContainerDied","Data":"1e10ea1cbae57f218b013025b5b8b3ca0a005ebc695fd282461fafd7b60ef6f8"} Dec 01 09:53:13 crc kubenswrapper[4933]: I1201 09:53:13.774392 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfc448c5-585f-43f9-bb98-c592a65c6fb9","Type":"ContainerDied","Data":"d6fba986fe285c68996afb5d8af1a1d64036d504600fb4117ed5ef32a394f36a"} Dec 01 09:53:13 crc kubenswrapper[4933]: I1201 09:53:13.774412 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfc448c5-585f-43f9-bb98-c592a65c6fb9","Type":"ContainerDied","Data":"3efef0ca223e351f022ab3ebaabb31644ea39a541cefeec945b3d1d590c9bc7b"} Dec 01 09:53:14 crc kubenswrapper[4933]: I1201 09:53:14.011549 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/horizon-75479c6864-2fvz5" Dec 01 09:53:14 crc kubenswrapper[4933]: I1201 09:53:14.108930 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6775f97bdb-vs7m8"] Dec 01 09:53:14 crc kubenswrapper[4933]: I1201 09:53:14.109801 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6775f97bdb-vs7m8" podUID="6ffcfb41-8086-4e28-b88a-da47dd38a844" containerName="horizon-log" containerID="cri-o://e186129627d1bc3cfc463ba580d1136dd71ee66cf9a2f5a3016cd2790cb63bb0" gracePeriod=30 Dec 01 09:53:14 crc kubenswrapper[4933]: I1201 09:53:14.110397 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6775f97bdb-vs7m8" podUID="6ffcfb41-8086-4e28-b88a-da47dd38a844" containerName="horizon" containerID="cri-o://4939aef99ded42615cef4e9d95fccaaae2f1009f74245945b7589fb343808301" gracePeriod=30 Dec 01 09:53:14 crc kubenswrapper[4933]: I1201 09:53:14.157290 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6775f97bdb-vs7m8" podUID="6ffcfb41-8086-4e28-b88a-da47dd38a844" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.506914 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-525pg"] Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.508518 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-525pg" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.516912 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5jlj\" (UniqueName: \"kubernetes.io/projected/a6d99846-3d21-461d-91c9-b4f52973fd73-kube-api-access-d5jlj\") pod \"nova-api-db-create-525pg\" (UID: \"a6d99846-3d21-461d-91c9-b4f52973fd73\") " pod="openstack/nova-api-db-create-525pg" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.517233 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6d99846-3d21-461d-91c9-b4f52973fd73-operator-scripts\") pod \"nova-api-db-create-525pg\" (UID: \"a6d99846-3d21-461d-91c9-b4f52973fd73\") " pod="openstack/nova-api-db-create-525pg" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.518119 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-525pg"] Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.559434 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6775f97bdb-vs7m8" podUID="6ffcfb41-8086-4e28-b88a-da47dd38a844" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:54334->10.217.0.146:8443: read: connection reset by peer" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.619394 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5jlj\" (UniqueName: \"kubernetes.io/projected/a6d99846-3d21-461d-91c9-b4f52973fd73-kube-api-access-d5jlj\") pod \"nova-api-db-create-525pg\" (UID: \"a6d99846-3d21-461d-91c9-b4f52973fd73\") " pod="openstack/nova-api-db-create-525pg" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.619487 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/a6d99846-3d21-461d-91c9-b4f52973fd73-operator-scripts\") pod \"nova-api-db-create-525pg\" (UID: \"a6d99846-3d21-461d-91c9-b4f52973fd73\") " pod="openstack/nova-api-db-create-525pg" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.620720 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6d99846-3d21-461d-91c9-b4f52973fd73-operator-scripts\") pod \"nova-api-db-create-525pg\" (UID: \"a6d99846-3d21-461d-91c9-b4f52973fd73\") " pod="openstack/nova-api-db-create-525pg" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.629669 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-69ffd"] Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.633895 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-69ffd" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.648037 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5jlj\" (UniqueName: \"kubernetes.io/projected/a6d99846-3d21-461d-91c9-b4f52973fd73-kube-api-access-d5jlj\") pod \"nova-api-db-create-525pg\" (UID: \"a6d99846-3d21-461d-91c9-b4f52973fd73\") " pod="openstack/nova-api-db-create-525pg" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.656366 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-69ffd"] Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.724372 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mm4n\" (UniqueName: \"kubernetes.io/projected/c9e4aba6-b83f-42c8-b38c-f5293f898400-kube-api-access-7mm4n\") pod \"nova-cell0-db-create-69ffd\" (UID: \"c9e4aba6-b83f-42c8-b38c-f5293f898400\") " pod="openstack/nova-cell0-db-create-69ffd" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.724562 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9e4aba6-b83f-42c8-b38c-f5293f898400-operator-scripts\") pod \"nova-cell0-db-create-69ffd\" (UID: \"c9e4aba6-b83f-42c8-b38c-f5293f898400\") " pod="openstack/nova-cell0-db-create-69ffd" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.724983 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-47e6-account-create-update-mtm6v"] Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.729050 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-47e6-account-create-update-mtm6v" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.737652 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.749998 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-47e6-account-create-update-mtm6v"] Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.805199 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-m6pgw"] Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.806729 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-m6pgw" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.818373 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-m6pgw"] Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.829051 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9e4aba6-b83f-42c8-b38c-f5293f898400-operator-scripts\") pod \"nova-cell0-db-create-69ffd\" (UID: \"c9e4aba6-b83f-42c8-b38c-f5293f898400\") " pod="openstack/nova-cell0-db-create-69ffd" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.829343 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mm4n\" (UniqueName: \"kubernetes.io/projected/c9e4aba6-b83f-42c8-b38c-f5293f898400-kube-api-access-7mm4n\") pod \"nova-cell0-db-create-69ffd\" (UID: \"c9e4aba6-b83f-42c8-b38c-f5293f898400\") " pod="openstack/nova-cell0-db-create-69ffd" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.833919 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9e4aba6-b83f-42c8-b38c-f5293f898400-operator-scripts\") pod \"nova-cell0-db-create-69ffd\" (UID: \"c9e4aba6-b83f-42c8-b38c-f5293f898400\") " pod="openstack/nova-cell0-db-create-69ffd" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.835931 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-525pg" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.836052 4933 generic.go:334] "Generic (PLEG): container finished" podID="6ffcfb41-8086-4e28-b88a-da47dd38a844" containerID="4939aef99ded42615cef4e9d95fccaaae2f1009f74245945b7589fb343808301" exitCode=0 Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.836122 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6775f97bdb-vs7m8" event={"ID":"6ffcfb41-8086-4e28-b88a-da47dd38a844","Type":"ContainerDied","Data":"4939aef99ded42615cef4e9d95fccaaae2f1009f74245945b7589fb343808301"} Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.836322 4933 scope.go:117] "RemoveContainer" containerID="0874941382f63e0c82cae2a5dc94543aa96b15309c19c42f17c114109f943016" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.851268 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mm4n\" (UniqueName: \"kubernetes.io/projected/c9e4aba6-b83f-42c8-b38c-f5293f898400-kube-api-access-7mm4n\") pod \"nova-cell0-db-create-69ffd\" (UID: \"c9e4aba6-b83f-42c8-b38c-f5293f898400\") " pod="openstack/nova-cell0-db-create-69ffd" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.925367 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-792b-account-create-update-ddcxv"] Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.927130 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-792b-account-create-update-ddcxv" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.932002 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36ad6a62-9532-4080-bfad-4f464fa988b0-operator-scripts\") pod \"nova-api-47e6-account-create-update-mtm6v\" (UID: \"36ad6a62-9532-4080-bfad-4f464fa988b0\") " pod="openstack/nova-api-47e6-account-create-update-mtm6v" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.932208 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c15666fa-6b57-4765-98ce-4ffc163e1d49-operator-scripts\") pod \"nova-cell1-db-create-m6pgw\" (UID: \"c15666fa-6b57-4765-98ce-4ffc163e1d49\") " pod="openstack/nova-cell1-db-create-m6pgw" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.932336 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2smkl\" (UniqueName: \"kubernetes.io/projected/c15666fa-6b57-4765-98ce-4ffc163e1d49-kube-api-access-2smkl\") pod \"nova-cell1-db-create-m6pgw\" (UID: \"c15666fa-6b57-4765-98ce-4ffc163e1d49\") " pod="openstack/nova-cell1-db-create-m6pgw" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.932497 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4nnm\" (UniqueName: \"kubernetes.io/projected/36ad6a62-9532-4080-bfad-4f464fa988b0-kube-api-access-k4nnm\") pod \"nova-api-47e6-account-create-update-mtm6v\" (UID: \"36ad6a62-9532-4080-bfad-4f464fa988b0\") " pod="openstack/nova-api-47e6-account-create-update-mtm6v" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.937933 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 01 09:53:17 crc kubenswrapper[4933]: I1201 09:53:17.946111 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-792b-account-create-update-ddcxv"] Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.035789 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4nnm\" (UniqueName: \"kubernetes.io/projected/36ad6a62-9532-4080-bfad-4f464fa988b0-kube-api-access-k4nnm\") pod \"nova-api-47e6-account-create-update-mtm6v\" (UID: \"36ad6a62-9532-4080-bfad-4f464fa988b0\") " pod="openstack/nova-api-47e6-account-create-update-mtm6v" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.036274 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pdlj\" (UniqueName: \"kubernetes.io/projected/484151e2-8ea8-4bfa-8f7a-77c86fad9df0-kube-api-access-9pdlj\") pod \"nova-cell0-792b-account-create-update-ddcxv\" (UID: \"484151e2-8ea8-4bfa-8f7a-77c86fad9df0\") " pod="openstack/nova-cell0-792b-account-create-update-ddcxv" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.036466 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36ad6a62-9532-4080-bfad-4f464fa988b0-operator-scripts\") pod \"nova-api-47e6-account-create-update-mtm6v\" (UID: \"36ad6a62-9532-4080-bfad-4f464fa988b0\") " pod="openstack/nova-api-47e6-account-create-update-mtm6v" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.036513 4933 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c15666fa-6b57-4765-98ce-4ffc163e1d49-operator-scripts\") pod \"nova-cell1-db-create-m6pgw\" (UID: \"c15666fa-6b57-4765-98ce-4ffc163e1d49\") " pod="openstack/nova-cell1-db-create-m6pgw" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.036536 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/484151e2-8ea8-4bfa-8f7a-77c86fad9df0-operator-scripts\") pod \"nova-cell0-792b-account-create-update-ddcxv\" (UID: \"484151e2-8ea8-4bfa-8f7a-77c86fad9df0\") " pod="openstack/nova-cell0-792b-account-create-update-ddcxv" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.036583 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2smkl\" (UniqueName: \"kubernetes.io/projected/c15666fa-6b57-4765-98ce-4ffc163e1d49-kube-api-access-2smkl\") pod \"nova-cell1-db-create-m6pgw\" (UID: \"c15666fa-6b57-4765-98ce-4ffc163e1d49\") " pod="openstack/nova-cell1-db-create-m6pgw" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.039278 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36ad6a62-9532-4080-bfad-4f464fa988b0-operator-scripts\") pod \"nova-api-47e6-account-create-update-mtm6v\" (UID: \"36ad6a62-9532-4080-bfad-4f464fa988b0\") " pod="openstack/nova-api-47e6-account-create-update-mtm6v" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.040514 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c15666fa-6b57-4765-98ce-4ffc163e1d49-operator-scripts\") pod \"nova-cell1-db-create-m6pgw\" (UID: \"c15666fa-6b57-4765-98ce-4ffc163e1d49\") " pod="openstack/nova-cell1-db-create-m6pgw" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.051996 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-69ffd" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.060371 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2smkl\" (UniqueName: \"kubernetes.io/projected/c15666fa-6b57-4765-98ce-4ffc163e1d49-kube-api-access-2smkl\") pod \"nova-cell1-db-create-m6pgw\" (UID: \"c15666fa-6b57-4765-98ce-4ffc163e1d49\") " pod="openstack/nova-cell1-db-create-m6pgw" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.060478 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4nnm\" (UniqueName: \"kubernetes.io/projected/36ad6a62-9532-4080-bfad-4f464fa988b0-kube-api-access-k4nnm\") pod \"nova-api-47e6-account-create-update-mtm6v\" (UID: \"36ad6a62-9532-4080-bfad-4f464fa988b0\") " pod="openstack/nova-api-47e6-account-create-update-mtm6v" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.063757 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-47e6-account-create-update-mtm6v" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.129412 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-157a-account-create-update-bfxmv"] Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.131590 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-157a-account-create-update-bfxmv" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.132138 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-m6pgw" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.136136 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.139666 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pdlj\" (UniqueName: \"kubernetes.io/projected/484151e2-8ea8-4bfa-8f7a-77c86fad9df0-kube-api-access-9pdlj\") pod \"nova-cell0-792b-account-create-update-ddcxv\" (UID: \"484151e2-8ea8-4bfa-8f7a-77c86fad9df0\") " pod="openstack/nova-cell0-792b-account-create-update-ddcxv" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.139758 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/484151e2-8ea8-4bfa-8f7a-77c86fad9df0-operator-scripts\") pod \"nova-cell0-792b-account-create-update-ddcxv\" (UID: \"484151e2-8ea8-4bfa-8f7a-77c86fad9df0\") " pod="openstack/nova-cell0-792b-account-create-update-ddcxv" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.140761 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/484151e2-8ea8-4bfa-8f7a-77c86fad9df0-operator-scripts\") pod \"nova-cell0-792b-account-create-update-ddcxv\" (UID: \"484151e2-8ea8-4bfa-8f7a-77c86fad9df0\") " pod="openstack/nova-cell0-792b-account-create-update-ddcxv" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.141806 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-157a-account-create-update-bfxmv"] Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.172320 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pdlj\" (UniqueName: \"kubernetes.io/projected/484151e2-8ea8-4bfa-8f7a-77c86fad9df0-kube-api-access-9pdlj\") pod \"nova-cell0-792b-account-create-update-ddcxv\" (UID: \"484151e2-8ea8-4bfa-8f7a-77c86fad9df0\") " pod="openstack/nova-cell0-792b-account-create-update-ddcxv" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.242113 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xdkd\" (UniqueName: \"kubernetes.io/projected/4d1e236a-dbe4-415d-bae9-945971a11083-kube-api-access-4xdkd\") pod \"nova-cell1-157a-account-create-update-bfxmv\" (UID: \"4d1e236a-dbe4-415d-bae9-945971a11083\") " pod="openstack/nova-cell1-157a-account-create-update-bfxmv" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.242385 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d1e236a-dbe4-415d-bae9-945971a11083-operator-scripts\") pod \"nova-cell1-157a-account-create-update-bfxmv\" (UID: \"4d1e236a-dbe4-415d-bae9-945971a11083\") " pod="openstack/nova-cell1-157a-account-create-update-bfxmv" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.297184 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-792b-account-create-update-ddcxv" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.343853 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d1e236a-dbe4-415d-bae9-945971a11083-operator-scripts\") pod \"nova-cell1-157a-account-create-update-bfxmv\" (UID: \"4d1e236a-dbe4-415d-bae9-945971a11083\") " pod="openstack/nova-cell1-157a-account-create-update-bfxmv" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.343991 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xdkd\" (UniqueName: \"kubernetes.io/projected/4d1e236a-dbe4-415d-bae9-945971a11083-kube-api-access-4xdkd\") pod \"nova-cell1-157a-account-create-update-bfxmv\" (UID: \"4d1e236a-dbe4-415d-bae9-945971a11083\") " pod="openstack/nova-cell1-157a-account-create-update-bfxmv" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.345132 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d1e236a-dbe4-415d-bae9-945971a11083-operator-scripts\") pod \"nova-cell1-157a-account-create-update-bfxmv\" (UID: \"4d1e236a-dbe4-415d-bae9-945971a11083\") " pod="openstack/nova-cell1-157a-account-create-update-bfxmv" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.375851 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xdkd\" (UniqueName: \"kubernetes.io/projected/4d1e236a-dbe4-415d-bae9-945971a11083-kube-api-access-4xdkd\") pod \"nova-cell1-157a-account-create-update-bfxmv\" (UID: \"4d1e236a-dbe4-415d-bae9-945971a11083\") " pod="openstack/nova-cell1-157a-account-create-update-bfxmv" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.455860 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-525pg"] Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.471662 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-157a-account-create-update-bfxmv" Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.826361 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-47e6-account-create-update-mtm6v"] Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.875044 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-525pg" event={"ID":"a6d99846-3d21-461d-91c9-b4f52973fd73","Type":"ContainerStarted","Data":"7fb3fbec3a0a451dccd2b5e0e9df577a2c682470bee6f0f53e30b3759ad23e19"} Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.889266 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-47e6-account-create-update-mtm6v" event={"ID":"36ad6a62-9532-4080-bfad-4f464fa988b0","Type":"ContainerStarted","Data":"02c552eb73d67b9cd59d3a8e1919cc9277ecc649830a39988fb966f36832cff4"} Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.963859 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-m6pgw"] Dec 01 09:53:18 crc kubenswrapper[4933]: I1201 09:53:18.992399 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-792b-account-create-update-ddcxv"] Dec 01 09:53:19 crc kubenswrapper[4933]: I1201 09:53:19.007011 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-69ffd"] Dec 01 09:53:19 crc kubenswrapper[4933]: I1201 09:53:19.064115 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6775f97bdb-vs7m8" podUID="6ffcfb41-8086-4e28-b88a-da47dd38a844" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Dec 01 09:53:19 crc kubenswrapper[4933]: I1201 09:53:19.154844 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-157a-account-create-update-bfxmv"] Dec 01 09:53:19 crc kubenswrapper[4933]: W1201 09:53:19.166686 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d1e236a_dbe4_415d_bae9_945971a11083.slice/crio-5218af4833fd717b5c29224d4921305aee4363687d2f38f61075f213728bcdf4 WatchSource:0}: Error finding container 5218af4833fd717b5c29224d4921305aee4363687d2f38f61075f213728bcdf4: Status 404 returned error can't find the container with id 5218af4833fd717b5c29224d4921305aee4363687d2f38f61075f213728bcdf4 Dec 01 09:53:19 crc kubenswrapper[4933]: I1201 09:53:19.948470 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-792b-account-create-update-ddcxv" event={"ID":"484151e2-8ea8-4bfa-8f7a-77c86fad9df0","Type":"ContainerStarted","Data":"26e49ca2e47060e6a843f15c74d04322e5caec39ae4e0416a9e352978bb57216"} Dec 01 09:53:19 crc kubenswrapper[4933]: I1201 09:53:19.950017 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-792b-account-create-update-ddcxv" event={"ID":"484151e2-8ea8-4bfa-8f7a-77c86fad9df0","Type":"ContainerStarted","Data":"7a6e34a4fcac539a9efd13ff5fbe88273ace28c970af2292897ed7ffff6ea8c0"} Dec 01 09:53:19 crc kubenswrapper[4933]: I1201 09:53:19.957351 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 09:53:19 crc kubenswrapper[4933]: I1201 09:53:19.957689 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="02e5b782-8d28-4206-aeb2-a9f1976abc8f" containerName="glance-log" containerID="cri-o://0dc2aa5e473292afd7f608803c2e5b66794ae41a16b9b9b1e26565bd52e3ab1b" gracePeriod=30 Dec 01 09:53:19 crc kubenswrapper[4933]: I1201 09:53:19.957919 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="02e5b782-8d28-4206-aeb2-a9f1976abc8f" containerName="glance-httpd" containerID="cri-o://71af3c1e997347f5888993bda334f7ecd26ddb62e1e0925557a88be906a17ba3" gracePeriod=30 Dec 01 09:53:19 crc kubenswrapper[4933]: I1201 09:53:19.982171 4933 generic.go:334] "Generic (PLEG): container finished" podID="a6d99846-3d21-461d-91c9-b4f52973fd73" containerID="08daec3cb872733483f3158656edc725ee1951b66faf08eea56fbe70ca813a2a" exitCode=0 Dec 01 09:53:19 crc kubenswrapper[4933]: I1201 09:53:19.982736 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-525pg" event={"ID":"a6d99846-3d21-461d-91c9-b4f52973fd73","Type":"ContainerDied","Data":"08daec3cb872733483f3158656edc725ee1951b66faf08eea56fbe70ca813a2a"} Dec 01 09:53:19 crc kubenswrapper[4933]: I1201 09:53:19.985507 4933 generic.go:334] "Generic (PLEG): container finished" podID="c15666fa-6b57-4765-98ce-4ffc163e1d49" containerID="fc451e4f15370c67ad3aa2b8e098c69dc8a5400deda51da59159228ecf92be75" exitCode=0 Dec 01 09:53:19 crc kubenswrapper[4933]: I1201 09:53:19.985669 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-m6pgw" event={"ID":"c15666fa-6b57-4765-98ce-4ffc163e1d49","Type":"ContainerDied","Data":"fc451e4f15370c67ad3aa2b8e098c69dc8a5400deda51da59159228ecf92be75"} Dec 01 09:53:19 crc kubenswrapper[4933]: I1201 09:53:19.985710 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-m6pgw" event={"ID":"c15666fa-6b57-4765-98ce-4ffc163e1d49","Type":"ContainerStarted","Data":"c3014ce18b66d1d14d3f5f6aeaf9326941526587b6c6b7a8e2032ece4c2cbadf"} Dec 01 09:53:19 crc kubenswrapper[4933]: I1201 09:53:19.987255 4933 generic.go:334] "Generic (PLEG): container finished" podID="c9e4aba6-b83f-42c8-b38c-f5293f898400" containerID="1ddb88de233bc10b0f440f0025d133358f5f80d7e4ffbe5cdbf6c586da45be96" exitCode=0 Dec 01 09:53:19 crc kubenswrapper[4933]: I1201 09:53:19.987347 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-69ffd" event={"ID":"c9e4aba6-b83f-42c8-b38c-f5293f898400","Type":"ContainerDied","Data":"1ddb88de233bc10b0f440f0025d133358f5f80d7e4ffbe5cdbf6c586da45be96"} Dec 01 09:53:19 crc kubenswrapper[4933]: I1201 09:53:19.987369 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-69ffd" event={"ID":"c9e4aba6-b83f-42c8-b38c-f5293f898400","Type":"ContainerStarted","Data":"e030a62a2362a5ed4dc024cee29e0173865bce5372b33231538067bbe160148a"} Dec 01 09:53:19 crc kubenswrapper[4933]: I1201 09:53:19.988945 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-157a-account-create-update-bfxmv" event={"ID":"4d1e236a-dbe4-415d-bae9-945971a11083","Type":"ContainerStarted","Data":"9059b2f0e9061395c82a19a92f43e573c7f53f5e3a24a267845c80c70f86fe6b"} Dec 01 09:53:19 crc kubenswrapper[4933]: I1201 09:53:19.988989 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-157a-account-create-update-bfxmv" event={"ID":"4d1e236a-dbe4-415d-bae9-945971a11083","Type":"ContainerStarted","Data":"5218af4833fd717b5c29224d4921305aee4363687d2f38f61075f213728bcdf4"} Dec 01 09:53:19 crc 
kubenswrapper[4933]: I1201 09:53:19.990829 4933 generic.go:334] "Generic (PLEG): container finished" podID="36ad6a62-9532-4080-bfad-4f464fa988b0" containerID="a3131557785e450e30c2fd7bf5e0f491318167e20a23783852cf303a73a03e4d" exitCode=0 Dec 01 09:53:19 crc kubenswrapper[4933]: I1201 09:53:19.990922 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-47e6-account-create-update-mtm6v" event={"ID":"36ad6a62-9532-4080-bfad-4f464fa988b0","Type":"ContainerDied","Data":"a3131557785e450e30c2fd7bf5e0f491318167e20a23783852cf303a73a03e4d"} Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.010506 4933 generic.go:334] "Generic (PLEG): container finished" podID="dfc448c5-585f-43f9-bb98-c592a65c6fb9" containerID="f34e8c843fcb29b1f86069f3ea981d4476178bb90f65505b44a78fd496034d51" exitCode=0 Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.010602 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfc448c5-585f-43f9-bb98-c592a65c6fb9","Type":"ContainerDied","Data":"f34e8c843fcb29b1f86069f3ea981d4476178bb90f65505b44a78fd496034d51"} Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.110020 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-157a-account-create-update-bfxmv" podStartSLOduration=2.110003725 podStartE2EDuration="2.110003725s" podCreationTimestamp="2025-12-01 09:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:53:20.098136224 +0000 UTC m=+1290.739859839" watchObservedRunningTime="2025-12-01 09:53:20.110003725 +0000 UTC m=+1290.751727330" Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.487672 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.635218 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfc448c5-585f-43f9-bb98-c592a65c6fb9-log-httpd\") pod \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.635362 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dfc448c5-585f-43f9-bb98-c592a65c6fb9-sg-core-conf-yaml\") pod \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.635436 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfc448c5-585f-43f9-bb98-c592a65c6fb9-scripts\") pod \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.635534 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfc448c5-585f-43f9-bb98-c592a65c6fb9-run-httpd\") pod \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.635571 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfc448c5-585f-43f9-bb98-c592a65c6fb9-config-data\") pod \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.635783 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc448c5-585f-43f9-bb98-c592a65c6fb9-combined-ca-bundle\") pod \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.635818 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6nvk\" (UniqueName: \"kubernetes.io/projected/dfc448c5-585f-43f9-bb98-c592a65c6fb9-kube-api-access-n6nvk\") pod \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\" (UID: \"dfc448c5-585f-43f9-bb98-c592a65c6fb9\") " Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.635950 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfc448c5-585f-43f9-bb98-c592a65c6fb9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dfc448c5-585f-43f9-bb98-c592a65c6fb9" (UID: "dfc448c5-585f-43f9-bb98-c592a65c6fb9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.636415 4933 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfc448c5-585f-43f9-bb98-c592a65c6fb9-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.636391 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfc448c5-585f-43f9-bb98-c592a65c6fb9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dfc448c5-585f-43f9-bb98-c592a65c6fb9" (UID: "dfc448c5-585f-43f9-bb98-c592a65c6fb9"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.644554 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc448c5-585f-43f9-bb98-c592a65c6fb9-scripts" (OuterVolumeSpecName: "scripts") pod "dfc448c5-585f-43f9-bb98-c592a65c6fb9" (UID: "dfc448c5-585f-43f9-bb98-c592a65c6fb9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.644563 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfc448c5-585f-43f9-bb98-c592a65c6fb9-kube-api-access-n6nvk" (OuterVolumeSpecName: "kube-api-access-n6nvk") pod "dfc448c5-585f-43f9-bb98-c592a65c6fb9" (UID: "dfc448c5-585f-43f9-bb98-c592a65c6fb9"). InnerVolumeSpecName "kube-api-access-n6nvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.676448 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc448c5-585f-43f9-bb98-c592a65c6fb9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dfc448c5-585f-43f9-bb98-c592a65c6fb9" (UID: "dfc448c5-585f-43f9-bb98-c592a65c6fb9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.738722 4933 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dfc448c5-585f-43f9-bb98-c592a65c6fb9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.738762 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfc448c5-585f-43f9-bb98-c592a65c6fb9-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.738775 4933 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfc448c5-585f-43f9-bb98-c592a65c6fb9-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.738788 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6nvk\" (UniqueName: \"kubernetes.io/projected/dfc448c5-585f-43f9-bb98-c592a65c6fb9-kube-api-access-n6nvk\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.752362 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc448c5-585f-43f9-bb98-c592a65c6fb9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfc448c5-585f-43f9-bb98-c592a65c6fb9" (UID: "dfc448c5-585f-43f9-bb98-c592a65c6fb9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.785453 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc448c5-585f-43f9-bb98-c592a65c6fb9-config-data" (OuterVolumeSpecName: "config-data") pod "dfc448c5-585f-43f9-bb98-c592a65c6fb9" (UID: "dfc448c5-585f-43f9-bb98-c592a65c6fb9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.840654 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc448c5-585f-43f9-bb98-c592a65c6fb9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.840693 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfc448c5-585f-43f9-bb98-c592a65c6fb9-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.891553 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.891837 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2e5be60e-fe89-43a0-b914-afbf646a6888" containerName="glance-log" containerID="cri-o://7fbdefa9b331b64acda196f42d688581c0f552728712316aaca99f6f0ff1d6e8" gracePeriod=30 Dec 01 09:53:20 crc kubenswrapper[4933]: I1201 09:53:20.893348 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2e5be60e-fe89-43a0-b914-afbf646a6888" containerName="glance-httpd" containerID="cri-o://5c522b75ceeeaee2e8506acc018bdcf23120b042987be2fe9a13e94298e082a4" gracePeriod=30 Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.024927 4933 generic.go:334] "Generic (PLEG): container finished" podID="4d1e236a-dbe4-415d-bae9-945971a11083" containerID="9059b2f0e9061395c82a19a92f43e573c7f53f5e3a24a267845c80c70f86fe6b" exitCode=0 Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.025463 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-157a-account-create-update-bfxmv" event={"ID":"4d1e236a-dbe4-415d-bae9-945971a11083","Type":"ContainerDied","Data":"9059b2f0e9061395c82a19a92f43e573c7f53f5e3a24a267845c80c70f86fe6b"} Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.030923 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfc448c5-585f-43f9-bb98-c592a65c6fb9","Type":"ContainerDied","Data":"5e44ed1e4736735f91472214f31d85536dee97312896d733a1d7c7629151c993"} Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.030984 4933 scope.go:117] "RemoveContainer" containerID="1e10ea1cbae57f218b013025b5b8b3ca0a005ebc695fd282461fafd7b60ef6f8" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.031124 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.049951 4933 generic.go:334] "Generic (PLEG): container finished" podID="484151e2-8ea8-4bfa-8f7a-77c86fad9df0" containerID="26e49ca2e47060e6a843f15c74d04322e5caec39ae4e0416a9e352978bb57216" exitCode=0 Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.050078 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-792b-account-create-update-ddcxv" event={"ID":"484151e2-8ea8-4bfa-8f7a-77c86fad9df0","Type":"ContainerDied","Data":"26e49ca2e47060e6a843f15c74d04322e5caec39ae4e0416a9e352978bb57216"} Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.068218 4933 generic.go:334] "Generic (PLEG): container finished" podID="2e5be60e-fe89-43a0-b914-afbf646a6888" containerID="7fbdefa9b331b64acda196f42d688581c0f552728712316aaca99f6f0ff1d6e8" exitCode=143 Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.068370 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2e5be60e-fe89-43a0-b914-afbf646a6888","Type":"ContainerDied","Data":"7fbdefa9b331b64acda196f42d688581c0f552728712316aaca99f6f0ff1d6e8"} Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.083056 4933 generic.go:334] "Generic (PLEG): container finished" podID="02e5b782-8d28-4206-aeb2-a9f1976abc8f" containerID="0dc2aa5e473292afd7f608803c2e5b66794ae41a16b9b9b1e26565bd52e3ab1b" exitCode=143 Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.083454 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"02e5b782-8d28-4206-aeb2-a9f1976abc8f","Type":"ContainerDied","Data":"0dc2aa5e473292afd7f608803c2e5b66794ae41a16b9b9b1e26565bd52e3ab1b"} Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.120482 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.135785 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.157400 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:53:21 crc kubenswrapper[4933]: E1201 09:53:21.157887 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfc448c5-585f-43f9-bb98-c592a65c6fb9" containerName="ceilometer-central-agent" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.157907 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfc448c5-585f-43f9-bb98-c592a65c6fb9" containerName="ceilometer-central-agent" Dec 01 09:53:21 crc kubenswrapper[4933]: E1201 09:53:21.157925 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfc448c5-585f-43f9-bb98-c592a65c6fb9" containerName="proxy-httpd" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.157942 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfc448c5-585f-43f9-bb98-c592a65c6fb9" containerName="proxy-httpd" Dec 01 09:53:21 crc kubenswrapper[4933]: E1201 09:53:21.157957 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfc448c5-585f-43f9-bb98-c592a65c6fb9" containerName="ceilometer-notification-agent" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.157963 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfc448c5-585f-43f9-bb98-c592a65c6fb9" containerName="ceilometer-notification-agent" Dec 01 09:53:21 crc kubenswrapper[4933]: E1201 09:53:21.158000 4933 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="dfc448c5-585f-43f9-bb98-c592a65c6fb9" containerName="sg-core" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.158010 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfc448c5-585f-43f9-bb98-c592a65c6fb9" containerName="sg-core" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.158182 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfc448c5-585f-43f9-bb98-c592a65c6fb9" containerName="proxy-httpd" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.158196 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfc448c5-585f-43f9-bb98-c592a65c6fb9" containerName="sg-core" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.158211 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfc448c5-585f-43f9-bb98-c592a65c6fb9" containerName="ceilometer-central-agent" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.158234 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfc448c5-585f-43f9-bb98-c592a65c6fb9" containerName="ceilometer-notification-agent" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.159559 4933 scope.go:117] "RemoveContainer" containerID="d6fba986fe285c68996afb5d8af1a1d64036d504600fb4117ed5ef32a394f36a" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.162136 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.170065 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.170289 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.203558 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.246355 4933 scope.go:117] "RemoveContainer" containerID="3efef0ca223e351f022ab3ebaabb31644ea39a541cefeec945b3d1d590c9bc7b" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.256858 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cab051ee-5581-45d7-8d3a-c721db35f4e7-run-httpd\") pod \"ceilometer-0\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " pod="openstack/ceilometer-0" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.256912 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cab051ee-5581-45d7-8d3a-c721db35f4e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " pod="openstack/ceilometer-0" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.256948 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cab051ee-5581-45d7-8d3a-c721db35f4e7-log-httpd\") pod \"ceilometer-0\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " pod="openstack/ceilometer-0" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.257013 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skt2m\" (UniqueName: \"kubernetes.io/projected/cab051ee-5581-45d7-8d3a-c721db35f4e7-kube-api-access-skt2m\") pod \"ceilometer-0\" (UID: 
\"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " pod="openstack/ceilometer-0" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.257029 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cab051ee-5581-45d7-8d3a-c721db35f4e7-scripts\") pod \"ceilometer-0\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " pod="openstack/ceilometer-0" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.257064 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab051ee-5581-45d7-8d3a-c721db35f4e7-config-data\") pod \"ceilometer-0\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " pod="openstack/ceilometer-0" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.257085 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab051ee-5581-45d7-8d3a-c721db35f4e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " pod="openstack/ceilometer-0" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.335018 4933 scope.go:117] "RemoveContainer" containerID="f34e8c843fcb29b1f86069f3ea981d4476178bb90f65505b44a78fd496034d51" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.377915 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab051ee-5581-45d7-8d3a-c721db35f4e7-config-data\") pod \"ceilometer-0\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " pod="openstack/ceilometer-0" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.378002 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab051ee-5581-45d7-8d3a-c721db35f4e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " pod="openstack/ceilometer-0" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.378197 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cab051ee-5581-45d7-8d3a-c721db35f4e7-run-httpd\") pod \"ceilometer-0\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " pod="openstack/ceilometer-0" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.378265 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cab051ee-5581-45d7-8d3a-c721db35f4e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " pod="openstack/ceilometer-0" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.378382 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cab051ee-5581-45d7-8d3a-c721db35f4e7-log-httpd\") pod \"ceilometer-0\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " pod="openstack/ceilometer-0" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.378560 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skt2m\" (UniqueName: \"kubernetes.io/projected/cab051ee-5581-45d7-8d3a-c721db35f4e7-kube-api-access-skt2m\") pod \"ceilometer-0\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " pod="openstack/ceilometer-0" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 
09:53:21.378589 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cab051ee-5581-45d7-8d3a-c721db35f4e7-scripts\") pod \"ceilometer-0\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " pod="openstack/ceilometer-0" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.380606 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cab051ee-5581-45d7-8d3a-c721db35f4e7-run-httpd\") pod \"ceilometer-0\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " pod="openstack/ceilometer-0" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.382248 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cab051ee-5581-45d7-8d3a-c721db35f4e7-log-httpd\") pod \"ceilometer-0\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " pod="openstack/ceilometer-0" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.412076 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cab051ee-5581-45d7-8d3a-c721db35f4e7-scripts\") pod \"ceilometer-0\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " pod="openstack/ceilometer-0" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.412198 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cab051ee-5581-45d7-8d3a-c721db35f4e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " pod="openstack/ceilometer-0" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.424509 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab051ee-5581-45d7-8d3a-c721db35f4e7-config-data\") pod \"ceilometer-0\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " pod="openstack/ceilometer-0" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.426204 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skt2m\" (UniqueName: \"kubernetes.io/projected/cab051ee-5581-45d7-8d3a-c721db35f4e7-kube-api-access-skt2m\") pod \"ceilometer-0\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " pod="openstack/ceilometer-0" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.433545 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab051ee-5581-45d7-8d3a-c721db35f4e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " pod="openstack/ceilometer-0" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.491327 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-792b-account-create-update-ddcxv" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.512364 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.583606 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/484151e2-8ea8-4bfa-8f7a-77c86fad9df0-operator-scripts\") pod \"484151e2-8ea8-4bfa-8f7a-77c86fad9df0\" (UID: \"484151e2-8ea8-4bfa-8f7a-77c86fad9df0\") " Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.583669 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pdlj\" (UniqueName: \"kubernetes.io/projected/484151e2-8ea8-4bfa-8f7a-77c86fad9df0-kube-api-access-9pdlj\") pod \"484151e2-8ea8-4bfa-8f7a-77c86fad9df0\" (UID: \"484151e2-8ea8-4bfa-8f7a-77c86fad9df0\") " Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.584809 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/484151e2-8ea8-4bfa-8f7a-77c86fad9df0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "484151e2-8ea8-4bfa-8f7a-77c86fad9df0" (UID: "484151e2-8ea8-4bfa-8f7a-77c86fad9df0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.589737 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/484151e2-8ea8-4bfa-8f7a-77c86fad9df0-kube-api-access-9pdlj" (OuterVolumeSpecName: "kube-api-access-9pdlj") pod "484151e2-8ea8-4bfa-8f7a-77c86fad9df0" (UID: "484151e2-8ea8-4bfa-8f7a-77c86fad9df0"). InnerVolumeSpecName "kube-api-access-9pdlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.689053 4933 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/484151e2-8ea8-4bfa-8f7a-77c86fad9df0-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.689089 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pdlj\" (UniqueName: \"kubernetes.io/projected/484151e2-8ea8-4bfa-8f7a-77c86fad9df0-kube-api-access-9pdlj\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.711542 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfc448c5-585f-43f9-bb98-c592a65c6fb9" path="/var/lib/kubelet/pods/dfc448c5-585f-43f9-bb98-c592a65c6fb9/volumes" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.977816 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-69ffd" Dec 01 09:53:21 crc kubenswrapper[4933]: I1201 09:53:21.985166 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-47e6-account-create-update-mtm6v" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.003329 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-m6pgw" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.010521 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-525pg" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.098283 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mm4n\" (UniqueName: \"kubernetes.io/projected/c9e4aba6-b83f-42c8-b38c-f5293f898400-kube-api-access-7mm4n\") pod \"c9e4aba6-b83f-42c8-b38c-f5293f898400\" (UID: \"c9e4aba6-b83f-42c8-b38c-f5293f898400\") " Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.098415 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9e4aba6-b83f-42c8-b38c-f5293f898400-operator-scripts\") pod \"c9e4aba6-b83f-42c8-b38c-f5293f898400\" (UID: \"c9e4aba6-b83f-42c8-b38c-f5293f898400\") " Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.098443 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2smkl\" (UniqueName: \"kubernetes.io/projected/c15666fa-6b57-4765-98ce-4ffc163e1d49-kube-api-access-2smkl\") pod \"c15666fa-6b57-4765-98ce-4ffc163e1d49\" (UID: \"c15666fa-6b57-4765-98ce-4ffc163e1d49\") " Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.098491 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4nnm\" (UniqueName: \"kubernetes.io/projected/36ad6a62-9532-4080-bfad-4f464fa988b0-kube-api-access-k4nnm\") pod \"36ad6a62-9532-4080-bfad-4f464fa988b0\" (UID: \"36ad6a62-9532-4080-bfad-4f464fa988b0\") " Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.098545 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c15666fa-6b57-4765-98ce-4ffc163e1d49-operator-scripts\") pod \"c15666fa-6b57-4765-98ce-4ffc163e1d49\" (UID: \"c15666fa-6b57-4765-98ce-4ffc163e1d49\") " Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.098563 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36ad6a62-9532-4080-bfad-4f464fa988b0-operator-scripts\") pod \"36ad6a62-9532-4080-bfad-4f464fa988b0\" (UID: \"36ad6a62-9532-4080-bfad-4f464fa988b0\") " Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.098643 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5jlj\" (UniqueName: \"kubernetes.io/projected/a6d99846-3d21-461d-91c9-b4f52973fd73-kube-api-access-d5jlj\") pod \"a6d99846-3d21-461d-91c9-b4f52973fd73\" (UID: \"a6d99846-3d21-461d-91c9-b4f52973fd73\") " Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.098746 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6d99846-3d21-461d-91c9-b4f52973fd73-operator-scripts\") pod \"a6d99846-3d21-461d-91c9-b4f52973fd73\" (UID: \"a6d99846-3d21-461d-91c9-b4f52973fd73\") " Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.099388 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36ad6a62-9532-4080-bfad-4f464fa988b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "36ad6a62-9532-4080-bfad-4f464fa988b0" (UID: "36ad6a62-9532-4080-bfad-4f464fa988b0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.099723 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6d99846-3d21-461d-91c9-b4f52973fd73-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a6d99846-3d21-461d-91c9-b4f52973fd73" (UID: "a6d99846-3d21-461d-91c9-b4f52973fd73"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.099753 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15666fa-6b57-4765-98ce-4ffc163e1d49-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c15666fa-6b57-4765-98ce-4ffc163e1d49" (UID: "c15666fa-6b57-4765-98ce-4ffc163e1d49"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.099799 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9e4aba6-b83f-42c8-b38c-f5293f898400-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c9e4aba6-b83f-42c8-b38c-f5293f898400" (UID: "c9e4aba6-b83f-42c8-b38c-f5293f898400"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.106160 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6d99846-3d21-461d-91c9-b4f52973fd73-kube-api-access-d5jlj" (OuterVolumeSpecName: "kube-api-access-d5jlj") pod "a6d99846-3d21-461d-91c9-b4f52973fd73" (UID: "a6d99846-3d21-461d-91c9-b4f52973fd73"). InnerVolumeSpecName "kube-api-access-d5jlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.106191 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-m6pgw" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.106224 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c15666fa-6b57-4765-98ce-4ffc163e1d49-kube-api-access-2smkl" (OuterVolumeSpecName: "kube-api-access-2smkl") pod "c15666fa-6b57-4765-98ce-4ffc163e1d49" (UID: "c15666fa-6b57-4765-98ce-4ffc163e1d49"). InnerVolumeSpecName "kube-api-access-2smkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.106414 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-m6pgw" event={"ID":"c15666fa-6b57-4765-98ce-4ffc163e1d49","Type":"ContainerDied","Data":"c3014ce18b66d1d14d3f5f6aeaf9326941526587b6c6b7a8e2032ece4c2cbadf"} Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.106459 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3014ce18b66d1d14d3f5f6aeaf9326941526587b6c6b7a8e2032ece4c2cbadf" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.106924 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9e4aba6-b83f-42c8-b38c-f5293f898400-kube-api-access-7mm4n" (OuterVolumeSpecName: "kube-api-access-7mm4n") pod "c9e4aba6-b83f-42c8-b38c-f5293f898400" (UID: "c9e4aba6-b83f-42c8-b38c-f5293f898400"). InnerVolumeSpecName "kube-api-access-7mm4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.110477 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-69ffd" event={"ID":"c9e4aba6-b83f-42c8-b38c-f5293f898400","Type":"ContainerDied","Data":"e030a62a2362a5ed4dc024cee29e0173865bce5372b33231538067bbe160148a"} Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.110754 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e030a62a2362a5ed4dc024cee29e0173865bce5372b33231538067bbe160148a" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.110842 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-69ffd" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.115023 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36ad6a62-9532-4080-bfad-4f464fa988b0-kube-api-access-k4nnm" (OuterVolumeSpecName: "kube-api-access-k4nnm") pod "36ad6a62-9532-4080-bfad-4f464fa988b0" (UID: "36ad6a62-9532-4080-bfad-4f464fa988b0"). InnerVolumeSpecName "kube-api-access-k4nnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.116292 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-47e6-account-create-update-mtm6v" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.116489 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-47e6-account-create-update-mtm6v" event={"ID":"36ad6a62-9532-4080-bfad-4f464fa988b0","Type":"ContainerDied","Data":"02c552eb73d67b9cd59d3a8e1919cc9277ecc649830a39988fb966f36832cff4"} Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.116992 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02c552eb73d67b9cd59d3a8e1919cc9277ecc649830a39988fb966f36832cff4" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.127175 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-792b-account-create-update-ddcxv" event={"ID":"484151e2-8ea8-4bfa-8f7a-77c86fad9df0","Type":"ContainerDied","Data":"7a6e34a4fcac539a9efd13ff5fbe88273ace28c970af2292897ed7ffff6ea8c0"} Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.127213 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a6e34a4fcac539a9efd13ff5fbe88273ace28c970af2292897ed7ffff6ea8c0" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.127549 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-792b-account-create-update-ddcxv" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.132607 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-525pg" event={"ID":"a6d99846-3d21-461d-91c9-b4f52973fd73","Type":"ContainerDied","Data":"7fb3fbec3a0a451dccd2b5e0e9df577a2c682470bee6f0f53e30b3759ad23e19"} Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.132645 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fb3fbec3a0a451dccd2b5e0e9df577a2c682470bee6f0f53e30b3759ad23e19" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.132726 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-525pg" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.170751 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.204069 4933 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c15666fa-6b57-4765-98ce-4ffc163e1d49-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.204109 4933 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36ad6a62-9532-4080-bfad-4f464fa988b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.204126 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5jlj\" (UniqueName: \"kubernetes.io/projected/a6d99846-3d21-461d-91c9-b4f52973fd73-kube-api-access-d5jlj\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.204138 4933 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6d99846-3d21-461d-91c9-b4f52973fd73-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.204153 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mm4n\" (UniqueName: \"kubernetes.io/projected/c9e4aba6-b83f-42c8-b38c-f5293f898400-kube-api-access-7mm4n\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.204163 4933 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9e4aba6-b83f-42c8-b38c-f5293f898400-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.204175 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2smkl\" (UniqueName: \"kubernetes.io/projected/c15666fa-6b57-4765-98ce-4ffc163e1d49-kube-api-access-2smkl\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.204187 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4nnm\" (UniqueName: \"kubernetes.io/projected/36ad6a62-9532-4080-bfad-4f464fa988b0-kube-api-access-k4nnm\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.443450 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-157a-account-create-update-bfxmv" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.511113 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d1e236a-dbe4-415d-bae9-945971a11083-operator-scripts\") pod \"4d1e236a-dbe4-415d-bae9-945971a11083\" (UID: \"4d1e236a-dbe4-415d-bae9-945971a11083\") " Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.511257 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xdkd\" (UniqueName: \"kubernetes.io/projected/4d1e236a-dbe4-415d-bae9-945971a11083-kube-api-access-4xdkd\") pod \"4d1e236a-dbe4-415d-bae9-945971a11083\" (UID: \"4d1e236a-dbe4-415d-bae9-945971a11083\") " Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.512697 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d1e236a-dbe4-415d-bae9-945971a11083-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d1e236a-dbe4-415d-bae9-945971a11083" (UID: "4d1e236a-dbe4-415d-bae9-945971a11083"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.517863 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.519160 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d1e236a-dbe4-415d-bae9-945971a11083-kube-api-access-4xdkd" (OuterVolumeSpecName: "kube-api-access-4xdkd") pod "4d1e236a-dbe4-415d-bae9-945971a11083" (UID: "4d1e236a-dbe4-415d-bae9-945971a11083"). InnerVolumeSpecName "kube-api-access-4xdkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.613932 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xdkd\" (UniqueName: \"kubernetes.io/projected/4d1e236a-dbe4-415d-bae9-945971a11083-kube-api-access-4xdkd\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:22 crc kubenswrapper[4933]: I1201 09:53:22.613974 4933 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d1e236a-dbe4-415d-bae9-945971a11083-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:23 crc kubenswrapper[4933]: I1201 09:53:23.148156 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-157a-account-create-update-bfxmv" event={"ID":"4d1e236a-dbe4-415d-bae9-945971a11083","Type":"ContainerDied","Data":"5218af4833fd717b5c29224d4921305aee4363687d2f38f61075f213728bcdf4"} Dec 01 09:53:23 crc kubenswrapper[4933]: I1201 09:53:23.148448 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5218af4833fd717b5c29224d4921305aee4363687d2f38f61075f213728bcdf4" Dec 01 09:53:23 crc kubenswrapper[4933]: I1201 09:53:23.148476 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-157a-account-create-update-bfxmv"
Dec 01 09:53:23 crc kubenswrapper[4933]: I1201 09:53:23.154498 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cab051ee-5581-45d7-8d3a-c721db35f4e7","Type":"ContainerStarted","Data":"af2c475f900689660bfbd06ea20d3800edd837deb975d78e7bb085e6a03d5125"}
Dec 01 09:53:23 crc kubenswrapper[4933]: I1201 09:53:23.154626 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cab051ee-5581-45d7-8d3a-c721db35f4e7","Type":"ContainerStarted","Data":"3ba9d9f8c70381c1cf7507bd6f02a66b8597ad8557a43ab187529160b97b20e4"}
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.035689 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.150103 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/02e5b782-8d28-4206-aeb2-a9f1976abc8f-httpd-run\") pod \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") "
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.150997 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6htr\" (UniqueName: \"kubernetes.io/projected/02e5b782-8d28-4206-aeb2-a9f1976abc8f-kube-api-access-t6htr\") pod \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") "
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.151943 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02e5b782-8d28-4206-aeb2-a9f1976abc8f-scripts\") pod \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") "
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.150874 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02e5b782-8d28-4206-aeb2-a9f1976abc8f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "02e5b782-8d28-4206-aeb2-a9f1976abc8f" (UID: "02e5b782-8d28-4206-aeb2-a9f1976abc8f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.152496 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02e5b782-8d28-4206-aeb2-a9f1976abc8f-logs" (OuterVolumeSpecName: "logs") pod "02e5b782-8d28-4206-aeb2-a9f1976abc8f" (UID: "02e5b782-8d28-4206-aeb2-a9f1976abc8f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.152023 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02e5b782-8d28-4206-aeb2-a9f1976abc8f-logs\") pod \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") "
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.153148 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02e5b782-8d28-4206-aeb2-a9f1976abc8f-public-tls-certs\") pod \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") "
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.153199 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e5b782-8d28-4206-aeb2-a9f1976abc8f-config-data\") pod \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") "
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.153355 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") "
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.153526 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02e5b782-8d28-4206-aeb2-a9f1976abc8f-combined-ca-bundle\") pod \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\" (UID: \"02e5b782-8d28-4206-aeb2-a9f1976abc8f\") "
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.156388 4933 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/02e5b782-8d28-4206-aeb2-a9f1976abc8f-httpd-run\") on node \"crc\" DevicePath \"\""
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.156424 4933 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02e5b782-8d28-4206-aeb2-a9f1976abc8f-logs\") on node \"crc\" DevicePath \"\""
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.170700 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02e5b782-8d28-4206-aeb2-a9f1976abc8f-scripts" (OuterVolumeSpecName: "scripts") pod "02e5b782-8d28-4206-aeb2-a9f1976abc8f" (UID: "02e5b782-8d28-4206-aeb2-a9f1976abc8f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.200772 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cab051ee-5581-45d7-8d3a-c721db35f4e7","Type":"ContainerStarted","Data":"c21e65f794d960f6609fbcb5c5d6416e60cf1574b9084b4bdc877436d85a2009"}
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.209683 4933 generic.go:334] "Generic (PLEG): container finished" podID="2e5be60e-fe89-43a0-b914-afbf646a6888" containerID="5c522b75ceeeaee2e8506acc018bdcf23120b042987be2fe9a13e94298e082a4" exitCode=0
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.209766 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2e5be60e-fe89-43a0-b914-afbf646a6888","Type":"ContainerDied","Data":"5c522b75ceeeaee2e8506acc018bdcf23120b042987be2fe9a13e94298e082a4"}
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.219393 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.219413 4933 generic.go:334] "Generic (PLEG): container finished" podID="02e5b782-8d28-4206-aeb2-a9f1976abc8f" containerID="71af3c1e997347f5888993bda334f7ecd26ddb62e1e0925557a88be906a17ba3" exitCode=0
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.219424 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"02e5b782-8d28-4206-aeb2-a9f1976abc8f","Type":"ContainerDied","Data":"71af3c1e997347f5888993bda334f7ecd26ddb62e1e0925557a88be906a17ba3"}
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.219587 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"02e5b782-8d28-4206-aeb2-a9f1976abc8f","Type":"ContainerDied","Data":"5a4710bad4a9e244f24e0b209252004d33ede45ae3f23d26c040208c8de4242b"}
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.219625 4933 scope.go:117] "RemoveContainer" containerID="71af3c1e997347f5888993bda334f7ecd26ddb62e1e0925557a88be906a17ba3"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.219956 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02e5b782-8d28-4206-aeb2-a9f1976abc8f-kube-api-access-t6htr" (OuterVolumeSpecName: "kube-api-access-t6htr") pod "02e5b782-8d28-4206-aeb2-a9f1976abc8f" (UID: "02e5b782-8d28-4206-aeb2-a9f1976abc8f"). InnerVolumeSpecName "kube-api-access-t6htr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.221941 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "02e5b782-8d28-4206-aeb2-a9f1976abc8f" (UID: "02e5b782-8d28-4206-aeb2-a9f1976abc8f"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.241933 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02e5b782-8d28-4206-aeb2-a9f1976abc8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02e5b782-8d28-4206-aeb2-a9f1976abc8f" (UID: "02e5b782-8d28-4206-aeb2-a9f1976abc8f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.256911 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02e5b782-8d28-4206-aeb2-a9f1976abc8f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "02e5b782-8d28-4206-aeb2-a9f1976abc8f" (UID: "02e5b782-8d28-4206-aeb2-a9f1976abc8f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.260984 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02e5b782-8d28-4206-aeb2-a9f1976abc8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.261018 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6htr\" (UniqueName: \"kubernetes.io/projected/02e5b782-8d28-4206-aeb2-a9f1976abc8f-kube-api-access-t6htr\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.261028 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02e5b782-8d28-4206-aeb2-a9f1976abc8f-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.261036 4933 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02e5b782-8d28-4206-aeb2-a9f1976abc8f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.261069 4933 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.300874 4933 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.319099 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02e5b782-8d28-4206-aeb2-a9f1976abc8f-config-data" (OuterVolumeSpecName: "config-data") pod "02e5b782-8d28-4206-aeb2-a9f1976abc8f" (UID: "02e5b782-8d28-4206-aeb2-a9f1976abc8f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.329033 4933 scope.go:117] "RemoveContainer" containerID="0dc2aa5e473292afd7f608803c2e5b66794ae41a16b9b9b1e26565bd52e3ab1b" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.362965 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e5b782-8d28-4206-aeb2-a9f1976abc8f-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.363004 4933 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.367528 4933 scope.go:117] "RemoveContainer" containerID="71af3c1e997347f5888993bda334f7ecd26ddb62e1e0925557a88be906a17ba3" Dec 01 09:53:24 crc kubenswrapper[4933]: E1201 09:53:24.368774 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71af3c1e997347f5888993bda334f7ecd26ddb62e1e0925557a88be906a17ba3\": container with ID starting with 71af3c1e997347f5888993bda334f7ecd26ddb62e1e0925557a88be906a17ba3 not found: ID does not exist" containerID="71af3c1e997347f5888993bda334f7ecd26ddb62e1e0925557a88be906a17ba3" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.368814 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71af3c1e997347f5888993bda334f7ecd26ddb62e1e0925557a88be906a17ba3"} err="failed to get container status \"71af3c1e997347f5888993bda334f7ecd26ddb62e1e0925557a88be906a17ba3\": rpc error: code = NotFound desc = could not find container \"71af3c1e997347f5888993bda334f7ecd26ddb62e1e0925557a88be906a17ba3\": container with ID starting with 71af3c1e997347f5888993bda334f7ecd26ddb62e1e0925557a88be906a17ba3 not found: ID does not exist" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.368839 4933 scope.go:117] "RemoveContainer" containerID="0dc2aa5e473292afd7f608803c2e5b66794ae41a16b9b9b1e26565bd52e3ab1b" Dec 01 09:53:24 crc kubenswrapper[4933]: E1201 09:53:24.371431 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dc2aa5e473292afd7f608803c2e5b66794ae41a16b9b9b1e26565bd52e3ab1b\": container with ID starting with 0dc2aa5e473292afd7f608803c2e5b66794ae41a16b9b9b1e26565bd52e3ab1b not found: ID does not exist" containerID="0dc2aa5e473292afd7f608803c2e5b66794ae41a16b9b9b1e26565bd52e3ab1b" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.371471 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dc2aa5e473292afd7f608803c2e5b66794ae41a16b9b9b1e26565bd52e3ab1b"} err="failed to get container status \"0dc2aa5e473292afd7f608803c2e5b66794ae41a16b9b9b1e26565bd52e3ab1b\": rpc error: code = NotFound desc = could not find container \"0dc2aa5e473292afd7f608803c2e5b66794ae41a16b9b9b1e26565bd52e3ab1b\": container with ID starting with 0dc2aa5e473292afd7f608803c2e5b66794ae41a16b9b9b1e26565bd52e3ab1b not found: ID does not exist" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.578079 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.599085 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 09:53:24 
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.643661 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 01 09:53:24 crc kubenswrapper[4933]: E1201 09:53:24.644257 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02e5b782-8d28-4206-aeb2-a9f1976abc8f" containerName="glance-httpd"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.644273 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="02e5b782-8d28-4206-aeb2-a9f1976abc8f" containerName="glance-httpd"
Dec 01 09:53:24 crc kubenswrapper[4933]: E1201 09:53:24.644323 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d1e236a-dbe4-415d-bae9-945971a11083" containerName="mariadb-account-create-update"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.644332 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d1e236a-dbe4-415d-bae9-945971a11083" containerName="mariadb-account-create-update"
Dec 01 09:53:24 crc kubenswrapper[4933]: E1201 09:53:24.644357 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d99846-3d21-461d-91c9-b4f52973fd73" containerName="mariadb-database-create"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.644366 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d99846-3d21-461d-91c9-b4f52973fd73" containerName="mariadb-database-create"
Dec 01 09:53:24 crc kubenswrapper[4933]: E1201 09:53:24.644380 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9e4aba6-b83f-42c8-b38c-f5293f898400" containerName="mariadb-database-create"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.644389 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e4aba6-b83f-42c8-b38c-f5293f898400" containerName="mariadb-database-create"
Dec 01 09:53:24 crc kubenswrapper[4933]: E1201 09:53:24.644403 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36ad6a62-9532-4080-bfad-4f464fa988b0" containerName="mariadb-account-create-update"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.644412 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="36ad6a62-9532-4080-bfad-4f464fa988b0" containerName="mariadb-account-create-update"
Dec 01 09:53:24 crc kubenswrapper[4933]: E1201 09:53:24.644420 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15666fa-6b57-4765-98ce-4ffc163e1d49" containerName="mariadb-database-create"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.644425 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15666fa-6b57-4765-98ce-4ffc163e1d49" containerName="mariadb-database-create"
Dec 01 09:53:24 crc kubenswrapper[4933]: E1201 09:53:24.644441 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="484151e2-8ea8-4bfa-8f7a-77c86fad9df0" containerName="mariadb-account-create-update"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.644450 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="484151e2-8ea8-4bfa-8f7a-77c86fad9df0" containerName="mariadb-account-create-update"
Dec 01 09:53:24 crc kubenswrapper[4933]: E1201 09:53:24.644466 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02e5b782-8d28-4206-aeb2-a9f1976abc8f" containerName="glance-log"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.644475 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="02e5b782-8d28-4206-aeb2-a9f1976abc8f" containerName="glance-log"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.644734 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9e4aba6-b83f-42c8-b38c-f5293f898400" containerName="mariadb-database-create"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.644745 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d1e236a-dbe4-415d-bae9-945971a11083" containerName="mariadb-account-create-update"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.644761 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="c15666fa-6b57-4765-98ce-4ffc163e1d49" containerName="mariadb-database-create"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.644771 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="484151e2-8ea8-4bfa-8f7a-77c86fad9df0" containerName="mariadb-account-create-update"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.644786 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="02e5b782-8d28-4206-aeb2-a9f1976abc8f" containerName="glance-log"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.644792 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6d99846-3d21-461d-91c9-b4f52973fd73" containerName="mariadb-database-create"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.644804 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="36ad6a62-9532-4080-bfad-4f464fa988b0" containerName="mariadb-account-create-update"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.644813 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="02e5b782-8d28-4206-aeb2-a9f1976abc8f" containerName="glance-httpd"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.646073 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.648112 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.650273 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
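The RemoveStaleState burst above fires when the replacement pod is admitted ("SyncLoop ADD"): the cpu_manager and memory_manager drop per-container resource assignments keyed by (podUID, containerName) for pods that are no longer active, including long-finished mariadb jobs. A rough sketch of that pruning, assuming a simple map layout (the real managers use checkpointed state, not a plain map):

```go
// Prune per-container state whose pod is no longer active, in the spirit
// of the cpu_manager/memory_manager "RemoveStaleState" lines above.
package main

import "fmt"

type key struct{ podUID, container string }

func removeStaleState(state map[key]string, active map[string]bool) {
	for k := range state { // deleting during range is safe in Go
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(state, k) // corresponds to "Deleted CPUSet assignment"
		}
	}
}

func main() {
	state := map[key]string{
		{"02e5b782", "glance-httpd"}: "cpus 0-1", // pod already deleted
		{"3ded59f9", "glance-api"}:   "cpus 2-3", // pod still active
	}
	removeStaleState(state, map[string]bool{"3ded59f9": true})
}
```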
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.654020 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.654522 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.771848 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e5be60e-fe89-43a0-b914-afbf646a6888-logs\") pod \"2e5be60e-fe89-43a0-b914-afbf646a6888\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") "
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.771896 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"2e5be60e-fe89-43a0-b914-afbf646a6888\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") "
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.771986 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e5be60e-fe89-43a0-b914-afbf646a6888-internal-tls-certs\") pod \"2e5be60e-fe89-43a0-b914-afbf646a6888\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") "
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.772034 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e5be60e-fe89-43a0-b914-afbf646a6888-httpd-run\") pod \"2e5be60e-fe89-43a0-b914-afbf646a6888\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") "
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.772073 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e5be60e-fe89-43a0-b914-afbf646a6888-config-data\") pod \"2e5be60e-fe89-43a0-b914-afbf646a6888\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") "
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.772157 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e5be60e-fe89-43a0-b914-afbf646a6888-combined-ca-bundle\") pod \"2e5be60e-fe89-43a0-b914-afbf646a6888\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") "
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.772214 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e5be60e-fe89-43a0-b914-afbf646a6888-scripts\") pod \"2e5be60e-fe89-43a0-b914-afbf646a6888\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") "
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.772295 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsql4\" (UniqueName: \"kubernetes.io/projected/2e5be60e-fe89-43a0-b914-afbf646a6888-kube-api-access-nsql4\") pod \"2e5be60e-fe89-43a0-b914-afbf646a6888\" (UID: \"2e5be60e-fe89-43a0-b914-afbf646a6888\") "
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.772711 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e5be60e-fe89-43a0-b914-afbf646a6888-logs" (OuterVolumeSpecName: "logs") pod "2e5be60e-fe89-43a0-b914-afbf646a6888" (UID: "2e5be60e-fe89-43a0-b914-afbf646a6888"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.772740 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn9qm\" (UniqueName: \"kubernetes.io/projected/3ded59f9-1443-44e5-93d0-d6fbc126c384-kube-api-access-rn9qm\") pod \"glance-default-external-api-0\" (UID: \"3ded59f9-1443-44e5-93d0-d6fbc126c384\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.772830 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ded59f9-1443-44e5-93d0-d6fbc126c384-scripts\") pod \"glance-default-external-api-0\" (UID: \"3ded59f9-1443-44e5-93d0-d6fbc126c384\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.772865 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ded59f9-1443-44e5-93d0-d6fbc126c384-logs\") pod \"glance-default-external-api-0\" (UID: \"3ded59f9-1443-44e5-93d0-d6fbc126c384\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.772897 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ded59f9-1443-44e5-93d0-d6fbc126c384-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3ded59f9-1443-44e5-93d0-d6fbc126c384\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.772928 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"3ded59f9-1443-44e5-93d0-d6fbc126c384\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.772927 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e5be60e-fe89-43a0-b914-afbf646a6888-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2e5be60e-fe89-43a0-b914-afbf646a6888" (UID: "2e5be60e-fe89-43a0-b914-afbf646a6888"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.772965 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ded59f9-1443-44e5-93d0-d6fbc126c384-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3ded59f9-1443-44e5-93d0-d6fbc126c384\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.773076 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ded59f9-1443-44e5-93d0-d6fbc126c384-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3ded59f9-1443-44e5-93d0-d6fbc126c384\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.773145 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ded59f9-1443-44e5-93d0-d6fbc126c384-config-data\") pod \"glance-default-external-api-0\" (UID: \"3ded59f9-1443-44e5-93d0-d6fbc126c384\") " pod="openstack/glance-default-external-api-0"
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.775448 4933 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e5be60e-fe89-43a0-b914-afbf646a6888-logs\") on node \"crc\" DevicePath \"\""
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.775570 4933 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e5be60e-fe89-43a0-b914-afbf646a6888-httpd-run\") on node \"crc\" DevicePath \"\""
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.783734 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "2e5be60e-fe89-43a0-b914-afbf646a6888" (UID: "2e5be60e-fe89-43a0-b914-afbf646a6888"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.783936 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e5be60e-fe89-43a0-b914-afbf646a6888-scripts" (OuterVolumeSpecName: "scripts") pod "2e5be60e-fe89-43a0-b914-afbf646a6888" (UID: "2e5be60e-fe89-43a0-b914-afbf646a6888"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.789568 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e5be60e-fe89-43a0-b914-afbf646a6888-kube-api-access-nsql4" (OuterVolumeSpecName: "kube-api-access-nsql4") pod "2e5be60e-fe89-43a0-b914-afbf646a6888" (UID: "2e5be60e-fe89-43a0-b914-afbf646a6888"). InnerVolumeSpecName "kube-api-access-nsql4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.825096 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e5be60e-fe89-43a0-b914-afbf646a6888-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e5be60e-fe89-43a0-b914-afbf646a6888" (UID: "2e5be60e-fe89-43a0-b914-afbf646a6888"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.860511 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e5be60e-fe89-43a0-b914-afbf646a6888-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2e5be60e-fe89-43a0-b914-afbf646a6888" (UID: "2e5be60e-fe89-43a0-b914-afbf646a6888"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.873505 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e5be60e-fe89-43a0-b914-afbf646a6888-config-data" (OuterVolumeSpecName: "config-data") pod "2e5be60e-fe89-43a0-b914-afbf646a6888" (UID: "2e5be60e-fe89-43a0-b914-afbf646a6888"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.877540 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ded59f9-1443-44e5-93d0-d6fbc126c384-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3ded59f9-1443-44e5-93d0-d6fbc126c384\") " pod="openstack/glance-default-external-api-0" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.877624 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ded59f9-1443-44e5-93d0-d6fbc126c384-config-data\") pod \"glance-default-external-api-0\" (UID: \"3ded59f9-1443-44e5-93d0-d6fbc126c384\") " pod="openstack/glance-default-external-api-0" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.877695 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn9qm\" (UniqueName: \"kubernetes.io/projected/3ded59f9-1443-44e5-93d0-d6fbc126c384-kube-api-access-rn9qm\") pod \"glance-default-external-api-0\" (UID: \"3ded59f9-1443-44e5-93d0-d6fbc126c384\") " pod="openstack/glance-default-external-api-0" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.877750 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ded59f9-1443-44e5-93d0-d6fbc126c384-scripts\") pod \"glance-default-external-api-0\" (UID: \"3ded59f9-1443-44e5-93d0-d6fbc126c384\") " pod="openstack/glance-default-external-api-0" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.877782 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ded59f9-1443-44e5-93d0-d6fbc126c384-logs\") pod \"glance-default-external-api-0\" (UID: \"3ded59f9-1443-44e5-93d0-d6fbc126c384\") " pod="openstack/glance-default-external-api-0" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.877811 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ded59f9-1443-44e5-93d0-d6fbc126c384-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3ded59f9-1443-44e5-93d0-d6fbc126c384\") " pod="openstack/glance-default-external-api-0" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.877838 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: 
\"3ded59f9-1443-44e5-93d0-d6fbc126c384\") " pod="openstack/glance-default-external-api-0" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.877885 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ded59f9-1443-44e5-93d0-d6fbc126c384-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3ded59f9-1443-44e5-93d0-d6fbc126c384\") " pod="openstack/glance-default-external-api-0" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.877964 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e5be60e-fe89-43a0-b914-afbf646a6888-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.877981 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e5be60e-fe89-43a0-b914-afbf646a6888-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.877997 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsql4\" (UniqueName: \"kubernetes.io/projected/2e5be60e-fe89-43a0-b914-afbf646a6888-kube-api-access-nsql4\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.878026 4933 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.878038 4933 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e5be60e-fe89-43a0-b914-afbf646a6888-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.878051 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e5be60e-fe89-43a0-b914-afbf646a6888-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.879181 4933 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"3ded59f9-1443-44e5-93d0-d6fbc126c384\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.880067 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ded59f9-1443-44e5-93d0-d6fbc126c384-logs\") pod \"glance-default-external-api-0\" (UID: \"3ded59f9-1443-44e5-93d0-d6fbc126c384\") " pod="openstack/glance-default-external-api-0" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.880346 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ded59f9-1443-44e5-93d0-d6fbc126c384-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3ded59f9-1443-44e5-93d0-d6fbc126c384\") " pod="openstack/glance-default-external-api-0" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.890380 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ded59f9-1443-44e5-93d0-d6fbc126c384-config-data\") pod \"glance-default-external-api-0\" (UID: \"3ded59f9-1443-44e5-93d0-d6fbc126c384\") " 
pod="openstack/glance-default-external-api-0" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.898148 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ded59f9-1443-44e5-93d0-d6fbc126c384-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3ded59f9-1443-44e5-93d0-d6fbc126c384\") " pod="openstack/glance-default-external-api-0" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.906521 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ded59f9-1443-44e5-93d0-d6fbc126c384-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3ded59f9-1443-44e5-93d0-d6fbc126c384\") " pod="openstack/glance-default-external-api-0" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.914068 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn9qm\" (UniqueName: \"kubernetes.io/projected/3ded59f9-1443-44e5-93d0-d6fbc126c384-kube-api-access-rn9qm\") pod \"glance-default-external-api-0\" (UID: \"3ded59f9-1443-44e5-93d0-d6fbc126c384\") " pod="openstack/glance-default-external-api-0" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.938101 4933 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.943550 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ded59f9-1443-44e5-93d0-d6fbc126c384-scripts\") pod \"glance-default-external-api-0\" (UID: \"3ded59f9-1443-44e5-93d0-d6fbc126c384\") " pod="openstack/glance-default-external-api-0" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.961539 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"3ded59f9-1443-44e5-93d0-d6fbc126c384\") " pod="openstack/glance-default-external-api-0" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.972275 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 09:53:24 crc kubenswrapper[4933]: I1201 09:53:24.981837 4933 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.249030 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2e5be60e-fe89-43a0-b914-afbf646a6888","Type":"ContainerDied","Data":"6bdd545539f296903513e10cc1777f74d9ac84b27f971a3694756f4888e3af8e"} Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.249101 4933 scope.go:117] "RemoveContainer" containerID="5c522b75ceeeaee2e8506acc018bdcf23120b042987be2fe9a13e94298e082a4" Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.249233 4933 util.go:48] "No ready sandbox for pod can be found. 
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.249233 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.268644 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cab051ee-5581-45d7-8d3a-c721db35f4e7","Type":"ContainerStarted","Data":"7e7003a6f45643e4b606112da4fb05720d97137b2b999dff4a5acf3f722d781a"}
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.308461 4933 scope.go:117] "RemoveContainer" containerID="7fbdefa9b331b64acda196f42d688581c0f552728712316aaca99f6f0ff1d6e8"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.333163 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.394943 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.412459 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 01 09:53:25 crc kubenswrapper[4933]: E1201 09:53:25.413509 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e5be60e-fe89-43a0-b914-afbf646a6888" containerName="glance-log"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.413534 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e5be60e-fe89-43a0-b914-afbf646a6888" containerName="glance-log"
Dec 01 09:53:25 crc kubenswrapper[4933]: E1201 09:53:25.413558 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e5be60e-fe89-43a0-b914-afbf646a6888" containerName="glance-httpd"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.413565 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e5be60e-fe89-43a0-b914-afbf646a6888" containerName="glance-httpd"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.413968 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e5be60e-fe89-43a0-b914-afbf646a6888" containerName="glance-log"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.414005 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e5be60e-fe89-43a0-b914-afbf646a6888" containerName="glance-httpd"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.415758 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.429924 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.432767 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.461259 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.496412 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45de9ced-9212-422d-9433-2a543d75f37f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"45de9ced-9212-422d-9433-2a543d75f37f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.496485 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45de9ced-9212-422d-9433-2a543d75f37f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"45de9ced-9212-422d-9433-2a543d75f37f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.496567 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"45de9ced-9212-422d-9433-2a543d75f37f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.496611 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45de9ced-9212-422d-9433-2a543d75f37f-logs\") pod \"glance-default-internal-api-0\" (UID: \"45de9ced-9212-422d-9433-2a543d75f37f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.496676 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7sr7\" (UniqueName: \"kubernetes.io/projected/45de9ced-9212-422d-9433-2a543d75f37f-kube-api-access-n7sr7\") pod \"glance-default-internal-api-0\" (UID: \"45de9ced-9212-422d-9433-2a543d75f37f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.496754 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45de9ced-9212-422d-9433-2a543d75f37f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"45de9ced-9212-422d-9433-2a543d75f37f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.496785 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45de9ced-9212-422d-9433-2a543d75f37f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"45de9ced-9212-422d-9433-2a543d75f37f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.496863 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45de9ced-9212-422d-9433-2a543d75f37f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"45de9ced-9212-422d-9433-2a543d75f37f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.598898 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45de9ced-9212-422d-9433-2a543d75f37f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"45de9ced-9212-422d-9433-2a543d75f37f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.599885 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45de9ced-9212-422d-9433-2a543d75f37f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"45de9ced-9212-422d-9433-2a543d75f37f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.600013 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45de9ced-9212-422d-9433-2a543d75f37f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"45de9ced-9212-422d-9433-2a543d75f37f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.600174 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"45de9ced-9212-422d-9433-2a543d75f37f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.600335 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45de9ced-9212-422d-9433-2a543d75f37f-logs\") pod \"glance-default-internal-api-0\" (UID: \"45de9ced-9212-422d-9433-2a543d75f37f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.600491 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7sr7\" (UniqueName: \"kubernetes.io/projected/45de9ced-9212-422d-9433-2a543d75f37f-kube-api-access-n7sr7\") pod \"glance-default-internal-api-0\" (UID: \"45de9ced-9212-422d-9433-2a543d75f37f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.600575 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45de9ced-9212-422d-9433-2a543d75f37f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"45de9ced-9212-422d-9433-2a543d75f37f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.600643 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45de9ced-9212-422d-9433-2a543d75f37f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"45de9ced-9212-422d-9433-2a543d75f37f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.600760 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45de9ced-9212-422d-9433-2a543d75f37f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"45de9ced-9212-422d-9433-2a543d75f37f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.600996 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45de9ced-9212-422d-9433-2a543d75f37f-logs\") pod \"glance-default-internal-api-0\" (UID: \"45de9ced-9212-422d-9433-2a543d75f37f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.601608 4933 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"45de9ced-9212-422d-9433-2a543d75f37f\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.609833 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45de9ced-9212-422d-9433-2a543d75f37f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"45de9ced-9212-422d-9433-2a543d75f37f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.612130 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45de9ced-9212-422d-9433-2a543d75f37f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"45de9ced-9212-422d-9433-2a543d75f37f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.616197 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45de9ced-9212-422d-9433-2a543d75f37f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"45de9ced-9212-422d-9433-2a543d75f37f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.618943 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45de9ced-9212-422d-9433-2a543d75f37f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"45de9ced-9212-422d-9433-2a543d75f37f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.645759 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7sr7\" (UniqueName: \"kubernetes.io/projected/45de9ced-9212-422d-9433-2a543d75f37f-kube-api-access-n7sr7\") pod \"glance-default-internal-api-0\" (UID: \"45de9ced-9212-422d-9433-2a543d75f37f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.671478 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"45de9ced-9212-422d-9433-2a543d75f37f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.682899 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02e5b782-8d28-4206-aeb2-a9f1976abc8f" path="/var/lib/kubelet/pods/02e5b782-8d28-4206-aeb2-a9f1976abc8f/volumes"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.684253 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e5be60e-fe89-43a0-b914-afbf646a6888" path="/var/lib/kubelet/pods/2e5be60e-fe89-43a0-b914-afbf646a6888/volumes"
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.722064 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 01 09:53:25 crc kubenswrapper[4933]: W1201 09:53:25.742736 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ded59f9_1443_44e5_93d0_d6fbc126c384.slice/crio-cb6ee671e544a745e8683b5382531fb67368e162d1951d50a22ba59d30ce6982 WatchSource:0}: Error finding container cb6ee671e544a745e8683b5382531fb67368e162d1951d50a22ba59d30ce6982: Status 404 returned error can't find the container with id cb6ee671e544a745e8683b5382531fb67368e162d1951d50a22ba59d30ce6982
Dec 01 09:53:25 crc kubenswrapper[4933]: I1201 09:53:25.748820 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 01 09:53:26 crc kubenswrapper[4933]: I1201 09:53:26.288887 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cab051ee-5581-45d7-8d3a-c721db35f4e7","Type":"ContainerStarted","Data":"06c51d14fca96aa7d5261b2d1bbcb11237c34b61c7cfc75d1d16cbb4a72eafac"}
Dec 01 09:53:26 crc kubenswrapper[4933]: I1201 09:53:26.289832 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 01 09:53:26 crc kubenswrapper[4933]: I1201 09:53:26.289424 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cab051ee-5581-45d7-8d3a-c721db35f4e7" containerName="ceilometer-notification-agent" containerID="cri-o://c21e65f794d960f6609fbcb5c5d6416e60cf1574b9084b4bdc877436d85a2009" gracePeriod=30
Dec 01 09:53:26 crc kubenswrapper[4933]: I1201 09:53:26.289295 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cab051ee-5581-45d7-8d3a-c721db35f4e7" containerName="ceilometer-central-agent" containerID="cri-o://af2c475f900689660bfbd06ea20d3800edd837deb975d78e7bb085e6a03d5125" gracePeriod=30
Dec 01 09:53:26 crc kubenswrapper[4933]: I1201 09:53:26.289435 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cab051ee-5581-45d7-8d3a-c721db35f4e7" containerName="proxy-httpd" containerID="cri-o://06c51d14fca96aa7d5261b2d1bbcb11237c34b61c7cfc75d1d16cbb4a72eafac" gracePeriod=30
Dec 01 09:53:26 crc kubenswrapper[4933]: I1201 09:53:26.289451 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cab051ee-5581-45d7-8d3a-c721db35f4e7" containerName="sg-core" containerID="cri-o://7e7003a6f45643e4b606112da4fb05720d97137b2b999dff4a5acf3f722d781a" gracePeriod=30
Dec 01 09:53:26 crc kubenswrapper[4933]: I1201 09:53:26.310225 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3ded59f9-1443-44e5-93d0-d6fbc126c384","Type":"ContainerStarted","Data":"cb6ee671e544a745e8683b5382531fb67368e162d1951d50a22ba59d30ce6982"}
Dec 01 09:53:26 crc kubenswrapper[4933]: I1201 09:53:26.314801 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.96787994 podStartE2EDuration="5.314778635s" podCreationTimestamp="2025-12-01 09:53:21 +0000 UTC" firstStartedPulling="2025-12-01 09:53:22.190194796 +0000 UTC m=+1292.831918421" lastFinishedPulling="2025-12-01 09:53:25.537093501 +0000 UTC m=+1296.178817116" observedRunningTime="2025-12-01 09:53:26.311570837 +0000 UTC m=+1296.953294472" watchObservedRunningTime="2025-12-01 09:53:26.314778635 +0000 UTC m=+1296.956502250"
Dec 01 09:53:26 crc kubenswrapper[4933]: I1201 09:53:26.363338 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 01 09:53:27 crc kubenswrapper[4933]: I1201 09:53:27.331777 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3ded59f9-1443-44e5-93d0-d6fbc126c384","Type":"ContainerStarted","Data":"a64e31a30597c873e0139009607619f67b9296cd1706e00c215853b7d176eaea"}
Dec 01 09:53:27 crc kubenswrapper[4933]: I1201 09:53:27.337148 4933 generic.go:334] "Generic (PLEG): container finished" podID="cab051ee-5581-45d7-8d3a-c721db35f4e7" containerID="06c51d14fca96aa7d5261b2d1bbcb11237c34b61c7cfc75d1d16cbb4a72eafac" exitCode=0
Dec 01 09:53:27 crc kubenswrapper[4933]: I1201 09:53:27.337212 4933 generic.go:334] "Generic (PLEG): container finished" podID="cab051ee-5581-45d7-8d3a-c721db35f4e7" containerID="7e7003a6f45643e4b606112da4fb05720d97137b2b999dff4a5acf3f722d781a" exitCode=2
Dec 01 09:53:27 crc kubenswrapper[4933]: I1201 09:53:27.337229 4933 generic.go:334] "Generic (PLEG): container finished" podID="cab051ee-5581-45d7-8d3a-c721db35f4e7" containerID="c21e65f794d960f6609fbcb5c5d6416e60cf1574b9084b4bdc877436d85a2009" exitCode=0
Dec 01 09:53:27 crc kubenswrapper[4933]: I1201 09:53:27.337226 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cab051ee-5581-45d7-8d3a-c721db35f4e7","Type":"ContainerDied","Data":"06c51d14fca96aa7d5261b2d1bbcb11237c34b61c7cfc75d1d16cbb4a72eafac"}
Dec 01 09:53:27 crc kubenswrapper[4933]: I1201 09:53:27.337290 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cab051ee-5581-45d7-8d3a-c721db35f4e7","Type":"ContainerDied","Data":"7e7003a6f45643e4b606112da4fb05720d97137b2b999dff4a5acf3f722d781a"}
Dec 01 09:53:27 crc kubenswrapper[4933]: I1201 09:53:27.337327 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cab051ee-5581-45d7-8d3a-c721db35f4e7","Type":"ContainerDied","Data":"c21e65f794d960f6609fbcb5c5d6416e60cf1574b9084b4bdc877436d85a2009"}
Dec 01 09:53:27 crc kubenswrapper[4933]: I1201 09:53:27.340149 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"45de9ced-9212-422d-9433-2a543d75f37f","Type":"ContainerStarted","Data":"ff92ee6d0de5738b793c22cdeb21668b4629c2a41a0be1577bfc10590ba5ee44"}
Dec 01 09:53:27 crc kubenswrapper[4933]: I1201 09:53:27.340209 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"45de9ced-9212-422d-9433-2a543d75f37f","Type":"ContainerStarted","Data":"9aaf30d075f6d45e2c09edcca461234df44b8642381032985b8fd71b834debeb"}
Dec 01 09:53:28 crc kubenswrapper[4933]: I1201 09:53:28.079239 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xzlnq"]
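The pod_startup_latency_tracker entry for ceilometer-0 above reports two durations: podStartSLOduration=1.97s, which excludes image pulls, and podStartE2EDuration=5.31s; the gap matches the ~3.35s between firstStartedPulling and lastFinishedPulling. A small stdlib scraper for these fields (field names taken from the lines above; pipe journal output into it):

```go
// Extract pod startup durations from kubelet journal lines of the form:
// ... "Observed pod startup duration" pod="ns/name" podStartSLOduration=N podStartE2EDuration="Ns" ...
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var re = regexp.MustCompile(
	`pod="([^"]+)" podStartSLOduration=([0-9.]+) podStartE2EDuration="([^"]+)"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%s SLO=%ss e2e=%s\n", m[1], m[2], m[3])
		}
	}
}
```

Run against this window it would report, for example, "openstack/ceilometer-0 SLO=1.96787994s e2e=5.314778635s".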
Dec 01 09:53:28 crc kubenswrapper[4933]: I1201 09:53:28.081585 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xzlnq"
Dec 01 09:53:28 crc kubenswrapper[4933]: I1201 09:53:28.084508 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-m25sm"
Dec 01 09:53:28 crc kubenswrapper[4933]: I1201 09:53:28.085326 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Dec 01 09:53:28 crc kubenswrapper[4933]: I1201 09:53:28.085859 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Dec 01 09:53:28 crc kubenswrapper[4933]: I1201 09:53:28.106607 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xzlnq"]
Dec 01 09:53:28 crc kubenswrapper[4933]: I1201 09:53:28.165139 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrk7f\" (UniqueName: \"kubernetes.io/projected/9427df5f-7233-4b84-b1c7-1567a9b686df-kube-api-access-hrk7f\") pod \"nova-cell0-conductor-db-sync-xzlnq\" (UID: \"9427df5f-7233-4b84-b1c7-1567a9b686df\") " pod="openstack/nova-cell0-conductor-db-sync-xzlnq"
Dec 01 09:53:28 crc kubenswrapper[4933]: I1201 09:53:28.165224 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9427df5f-7233-4b84-b1c7-1567a9b686df-config-data\") pod \"nova-cell0-conductor-db-sync-xzlnq\" (UID: \"9427df5f-7233-4b84-b1c7-1567a9b686df\") " pod="openstack/nova-cell0-conductor-db-sync-xzlnq"
Dec 01 09:53:28 crc kubenswrapper[4933]: I1201 09:53:28.165379 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9427df5f-7233-4b84-b1c7-1567a9b686df-scripts\") pod \"nova-cell0-conductor-db-sync-xzlnq\" (UID: \"9427df5f-7233-4b84-b1c7-1567a9b686df\") " pod="openstack/nova-cell0-conductor-db-sync-xzlnq"
Dec 01 09:53:28 crc kubenswrapper[4933]: I1201 09:53:28.165465 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9427df5f-7233-4b84-b1c7-1567a9b686df-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xzlnq\" (UID: \"9427df5f-7233-4b84-b1c7-1567a9b686df\") " pod="openstack/nova-cell0-conductor-db-sync-xzlnq"
Dec 01 09:53:28 crc kubenswrapper[4933]: I1201 09:53:28.267112 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9427df5f-7233-4b84-b1c7-1567a9b686df-config-data\") pod \"nova-cell0-conductor-db-sync-xzlnq\" (UID: \"9427df5f-7233-4b84-b1c7-1567a9b686df\") " pod="openstack/nova-cell0-conductor-db-sync-xzlnq"
Dec 01 09:53:28 crc kubenswrapper[4933]: I1201 09:53:28.267405 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9427df5f-7233-4b84-b1c7-1567a9b686df-scripts\") pod \"nova-cell0-conductor-db-sync-xzlnq\" (UID: \"9427df5f-7233-4b84-b1c7-1567a9b686df\") " pod="openstack/nova-cell0-conductor-db-sync-xzlnq"
Dec 01 09:53:28 crc kubenswrapper[4933]: I1201 09:53:28.267434 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9427df5f-7233-4b84-b1c7-1567a9b686df-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xzlnq\" (UID: \"9427df5f-7233-4b84-b1c7-1567a9b686df\") " pod="openstack/nova-cell0-conductor-db-sync-xzlnq"
Dec 01 09:53:28 crc kubenswrapper[4933]: I1201 09:53:28.267481 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrk7f\" (UniqueName: \"kubernetes.io/projected/9427df5f-7233-4b84-b1c7-1567a9b686df-kube-api-access-hrk7f\") pod \"nova-cell0-conductor-db-sync-xzlnq\" (UID: \"9427df5f-7233-4b84-b1c7-1567a9b686df\") " pod="openstack/nova-cell0-conductor-db-sync-xzlnq"
Dec 01 09:53:28 crc kubenswrapper[4933]: I1201 09:53:28.275164 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9427df5f-7233-4b84-b1c7-1567a9b686df-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xzlnq\" (UID: \"9427df5f-7233-4b84-b1c7-1567a9b686df\") " pod="openstack/nova-cell0-conductor-db-sync-xzlnq"
Dec 01 09:53:28 crc kubenswrapper[4933]: I1201 09:53:28.275324 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9427df5f-7233-4b84-b1c7-1567a9b686df-config-data\") pod \"nova-cell0-conductor-db-sync-xzlnq\" (UID: \"9427df5f-7233-4b84-b1c7-1567a9b686df\") " pod="openstack/nova-cell0-conductor-db-sync-xzlnq"
Dec 01 09:53:28 crc kubenswrapper[4933]: I1201 09:53:28.285228 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9427df5f-7233-4b84-b1c7-1567a9b686df-scripts\") pod \"nova-cell0-conductor-db-sync-xzlnq\" (UID: \"9427df5f-7233-4b84-b1c7-1567a9b686df\") " pod="openstack/nova-cell0-conductor-db-sync-xzlnq"
Dec 01 09:53:28 crc kubenswrapper[4933]: I1201 09:53:28.288619 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrk7f\" (UniqueName: \"kubernetes.io/projected/9427df5f-7233-4b84-b1c7-1567a9b686df-kube-api-access-hrk7f\") pod \"nova-cell0-conductor-db-sync-xzlnq\" (UID: \"9427df5f-7233-4b84-b1c7-1567a9b686df\") " pod="openstack/nova-cell0-conductor-db-sync-xzlnq"
Dec 01 09:53:28 crc kubenswrapper[4933]: I1201 09:53:28.371620 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"45de9ced-9212-422d-9433-2a543d75f37f","Type":"ContainerStarted","Data":"436676ed8bfe383655d04d43b2b96215ac2193736882817c74a128588847c14f"}
Dec 01 09:53:28 crc kubenswrapper[4933]: I1201 09:53:28.377793 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3ded59f9-1443-44e5-93d0-d6fbc126c384","Type":"ContainerStarted","Data":"9f7fe5a8d51aa3bcefc0caf090f71173436e08449afd21ca26a01aa6dbb542b0"}
Dec 01 09:53:28 crc kubenswrapper[4933]: I1201 09:53:28.410950 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.410907395 podStartE2EDuration="3.410907395s" podCreationTimestamp="2025-12-01 09:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:53:28.405150224 +0000 UTC m=+1299.046873829" watchObservedRunningTime="2025-12-01 09:53:28.410907395 +0000 UTC m=+1299.052631010"
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xzlnq" Dec 01 09:53:28 crc kubenswrapper[4933]: I1201 09:53:28.451591 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.451550849 podStartE2EDuration="4.451550849s" podCreationTimestamp="2025-12-01 09:53:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:53:28.42992285 +0000 UTC m=+1299.071646475" watchObservedRunningTime="2025-12-01 09:53:28.451550849 +0000 UTC m=+1299.093274464" Dec 01 09:53:28 crc kubenswrapper[4933]: I1201 09:53:28.923620 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xzlnq"] Dec 01 09:53:29 crc kubenswrapper[4933]: I1201 09:53:29.060611 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6775f97bdb-vs7m8" podUID="6ffcfb41-8086-4e28-b88a-da47dd38a844" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Dec 01 09:53:29 crc kubenswrapper[4933]: I1201 09:53:29.390053 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xzlnq" event={"ID":"9427df5f-7233-4b84-b1c7-1567a9b686df","Type":"ContainerStarted","Data":"716ce5f0d0de0c98530a0f2339a99d952430c289d1989dc39ef8df00ddeacb03"} Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.333614 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.417835 4933 generic.go:334] "Generic (PLEG): container finished" podID="cab051ee-5581-45d7-8d3a-c721db35f4e7" containerID="af2c475f900689660bfbd06ea20d3800edd837deb975d78e7bb085e6a03d5125" exitCode=0 Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.417913 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cab051ee-5581-45d7-8d3a-c721db35f4e7","Type":"ContainerDied","Data":"af2c475f900689660bfbd06ea20d3800edd837deb975d78e7bb085e6a03d5125"} Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.417976 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cab051ee-5581-45d7-8d3a-c721db35f4e7","Type":"ContainerDied","Data":"3ba9d9f8c70381c1cf7507bd6f02a66b8597ad8557a43ab187529160b97b20e4"} Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.417974 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.418005 4933 scope.go:117] "RemoveContainer" containerID="06c51d14fca96aa7d5261b2d1bbcb11237c34b61c7cfc75d1d16cbb4a72eafac" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.450902 4933 scope.go:117] "RemoveContainer" containerID="7e7003a6f45643e4b606112da4fb05720d97137b2b999dff4a5acf3f722d781a" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.500614 4933 scope.go:117] "RemoveContainer" containerID="c21e65f794d960f6609fbcb5c5d6416e60cf1574b9084b4bdc877436d85a2009" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.518374 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab051ee-5581-45d7-8d3a-c721db35f4e7-combined-ca-bundle\") pod \"cab051ee-5581-45d7-8d3a-c721db35f4e7\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.518563 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cab051ee-5581-45d7-8d3a-c721db35f4e7-run-httpd\") pod \"cab051ee-5581-45d7-8d3a-c721db35f4e7\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.518672 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cab051ee-5581-45d7-8d3a-c721db35f4e7-scripts\") pod \"cab051ee-5581-45d7-8d3a-c721db35f4e7\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.518725 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab051ee-5581-45d7-8d3a-c721db35f4e7-config-data\") pod \"cab051ee-5581-45d7-8d3a-c721db35f4e7\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.518873 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cab051ee-5581-45d7-8d3a-c721db35f4e7-sg-core-conf-yaml\") pod \"cab051ee-5581-45d7-8d3a-c721db35f4e7\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.519018 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skt2m\" (UniqueName: \"kubernetes.io/projected/cab051ee-5581-45d7-8d3a-c721db35f4e7-kube-api-access-skt2m\") pod \"cab051ee-5581-45d7-8d3a-c721db35f4e7\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.519071 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cab051ee-5581-45d7-8d3a-c721db35f4e7-log-httpd\") pod \"cab051ee-5581-45d7-8d3a-c721db35f4e7\" (UID: \"cab051ee-5581-45d7-8d3a-c721db35f4e7\") " Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.519760 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cab051ee-5581-45d7-8d3a-c721db35f4e7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cab051ee-5581-45d7-8d3a-c721db35f4e7" (UID: "cab051ee-5581-45d7-8d3a-c721db35f4e7"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.520264 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cab051ee-5581-45d7-8d3a-c721db35f4e7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cab051ee-5581-45d7-8d3a-c721db35f4e7" (UID: "cab051ee-5581-45d7-8d3a-c721db35f4e7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.530401 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cab051ee-5581-45d7-8d3a-c721db35f4e7-scripts" (OuterVolumeSpecName: "scripts") pod "cab051ee-5581-45d7-8d3a-c721db35f4e7" (UID: "cab051ee-5581-45d7-8d3a-c721db35f4e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.530402 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cab051ee-5581-45d7-8d3a-c721db35f4e7-kube-api-access-skt2m" (OuterVolumeSpecName: "kube-api-access-skt2m") pod "cab051ee-5581-45d7-8d3a-c721db35f4e7" (UID: "cab051ee-5581-45d7-8d3a-c721db35f4e7"). InnerVolumeSpecName "kube-api-access-skt2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.531813 4933 scope.go:117] "RemoveContainer" containerID="af2c475f900689660bfbd06ea20d3800edd837deb975d78e7bb085e6a03d5125" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.573875 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cab051ee-5581-45d7-8d3a-c721db35f4e7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cab051ee-5581-45d7-8d3a-c721db35f4e7" (UID: "cab051ee-5581-45d7-8d3a-c721db35f4e7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.613951 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cab051ee-5581-45d7-8d3a-c721db35f4e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cab051ee-5581-45d7-8d3a-c721db35f4e7" (UID: "cab051ee-5581-45d7-8d3a-c721db35f4e7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.621563 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cab051ee-5581-45d7-8d3a-c721db35f4e7-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.621611 4933 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cab051ee-5581-45d7-8d3a-c721db35f4e7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.621628 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skt2m\" (UniqueName: \"kubernetes.io/projected/cab051ee-5581-45d7-8d3a-c721db35f4e7-kube-api-access-skt2m\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.621640 4933 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cab051ee-5581-45d7-8d3a-c721db35f4e7-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.621652 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab051ee-5581-45d7-8d3a-c721db35f4e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.621664 4933 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cab051ee-5581-45d7-8d3a-c721db35f4e7-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.634101 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cab051ee-5581-45d7-8d3a-c721db35f4e7-config-data" (OuterVolumeSpecName: "config-data") pod "cab051ee-5581-45d7-8d3a-c721db35f4e7" (UID: "cab051ee-5581-45d7-8d3a-c721db35f4e7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.643856 4933 scope.go:117] "RemoveContainer" containerID="06c51d14fca96aa7d5261b2d1bbcb11237c34b61c7cfc75d1d16cbb4a72eafac" Dec 01 09:53:30 crc kubenswrapper[4933]: E1201 09:53:30.644465 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06c51d14fca96aa7d5261b2d1bbcb11237c34b61c7cfc75d1d16cbb4a72eafac\": container with ID starting with 06c51d14fca96aa7d5261b2d1bbcb11237c34b61c7cfc75d1d16cbb4a72eafac not found: ID does not exist" containerID="06c51d14fca96aa7d5261b2d1bbcb11237c34b61c7cfc75d1d16cbb4a72eafac" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.644509 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06c51d14fca96aa7d5261b2d1bbcb11237c34b61c7cfc75d1d16cbb4a72eafac"} err="failed to get container status \"06c51d14fca96aa7d5261b2d1bbcb11237c34b61c7cfc75d1d16cbb4a72eafac\": rpc error: code = NotFound desc = could not find container \"06c51d14fca96aa7d5261b2d1bbcb11237c34b61c7cfc75d1d16cbb4a72eafac\": container with ID starting with 06c51d14fca96aa7d5261b2d1bbcb11237c34b61c7cfc75d1d16cbb4a72eafac not found: ID does not exist" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.644546 4933 scope.go:117] "RemoveContainer" containerID="7e7003a6f45643e4b606112da4fb05720d97137b2b999dff4a5acf3f722d781a" Dec 01 09:53:30 crc kubenswrapper[4933]: E1201 09:53:30.644909 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e7003a6f45643e4b606112da4fb05720d97137b2b999dff4a5acf3f722d781a\": container with ID starting with 7e7003a6f45643e4b606112da4fb05720d97137b2b999dff4a5acf3f722d781a not found: ID does not exist" containerID="7e7003a6f45643e4b606112da4fb05720d97137b2b999dff4a5acf3f722d781a" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.644987 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e7003a6f45643e4b606112da4fb05720d97137b2b999dff4a5acf3f722d781a"} err="failed to get container status \"7e7003a6f45643e4b606112da4fb05720d97137b2b999dff4a5acf3f722d781a\": rpc error: code = NotFound desc = could not find container \"7e7003a6f45643e4b606112da4fb05720d97137b2b999dff4a5acf3f722d781a\": container with ID starting with 7e7003a6f45643e4b606112da4fb05720d97137b2b999dff4a5acf3f722d781a not found: ID does not exist" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.645030 4933 scope.go:117] "RemoveContainer" containerID="c21e65f794d960f6609fbcb5c5d6416e60cf1574b9084b4bdc877436d85a2009" Dec 01 09:53:30 crc kubenswrapper[4933]: E1201 09:53:30.645632 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c21e65f794d960f6609fbcb5c5d6416e60cf1574b9084b4bdc877436d85a2009\": container with ID starting with c21e65f794d960f6609fbcb5c5d6416e60cf1574b9084b4bdc877436d85a2009 not found: ID does not exist" containerID="c21e65f794d960f6609fbcb5c5d6416e60cf1574b9084b4bdc877436d85a2009" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.645667 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c21e65f794d960f6609fbcb5c5d6416e60cf1574b9084b4bdc877436d85a2009"} err="failed to get container status \"c21e65f794d960f6609fbcb5c5d6416e60cf1574b9084b4bdc877436d85a2009\": rpc error: code = NotFound desc = could not 
find container \"c21e65f794d960f6609fbcb5c5d6416e60cf1574b9084b4bdc877436d85a2009\": container with ID starting with c21e65f794d960f6609fbcb5c5d6416e60cf1574b9084b4bdc877436d85a2009 not found: ID does not exist" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.645684 4933 scope.go:117] "RemoveContainer" containerID="af2c475f900689660bfbd06ea20d3800edd837deb975d78e7bb085e6a03d5125" Dec 01 09:53:30 crc kubenswrapper[4933]: E1201 09:53:30.646063 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af2c475f900689660bfbd06ea20d3800edd837deb975d78e7bb085e6a03d5125\": container with ID starting with af2c475f900689660bfbd06ea20d3800edd837deb975d78e7bb085e6a03d5125 not found: ID does not exist" containerID="af2c475f900689660bfbd06ea20d3800edd837deb975d78e7bb085e6a03d5125" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.646110 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af2c475f900689660bfbd06ea20d3800edd837deb975d78e7bb085e6a03d5125"} err="failed to get container status \"af2c475f900689660bfbd06ea20d3800edd837deb975d78e7bb085e6a03d5125\": rpc error: code = NotFound desc = could not find container \"af2c475f900689660bfbd06ea20d3800edd837deb975d78e7bb085e6a03d5125\": container with ID starting with af2c475f900689660bfbd06ea20d3800edd837deb975d78e7bb085e6a03d5125 not found: ID does not exist" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.724113 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab051ee-5581-45d7-8d3a-c721db35f4e7-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.760718 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.775168 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.790139 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:53:30 crc kubenswrapper[4933]: E1201 09:53:30.790686 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cab051ee-5581-45d7-8d3a-c721db35f4e7" containerName="ceilometer-notification-agent" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.790708 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="cab051ee-5581-45d7-8d3a-c721db35f4e7" containerName="ceilometer-notification-agent" Dec 01 09:53:30 crc kubenswrapper[4933]: E1201 09:53:30.790741 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cab051ee-5581-45d7-8d3a-c721db35f4e7" containerName="sg-core" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.790747 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="cab051ee-5581-45d7-8d3a-c721db35f4e7" containerName="sg-core" Dec 01 09:53:30 crc kubenswrapper[4933]: E1201 09:53:30.790766 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cab051ee-5581-45d7-8d3a-c721db35f4e7" containerName="ceilometer-central-agent" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.790775 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="cab051ee-5581-45d7-8d3a-c721db35f4e7" containerName="ceilometer-central-agent" Dec 01 09:53:30 crc kubenswrapper[4933]: E1201 09:53:30.790788 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cab051ee-5581-45d7-8d3a-c721db35f4e7" 
containerName="proxy-httpd" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.790797 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="cab051ee-5581-45d7-8d3a-c721db35f4e7" containerName="proxy-httpd" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.791065 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="cab051ee-5581-45d7-8d3a-c721db35f4e7" containerName="ceilometer-central-agent" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.791176 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="cab051ee-5581-45d7-8d3a-c721db35f4e7" containerName="ceilometer-notification-agent" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.791196 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="cab051ee-5581-45d7-8d3a-c721db35f4e7" containerName="sg-core" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.791208 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="cab051ee-5581-45d7-8d3a-c721db35f4e7" containerName="proxy-httpd" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.794893 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.797825 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.798291 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.812397 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.929317 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755015a8-ffb0-477f-bcbe-2db30cae8c43-log-httpd\") pod \"ceilometer-0\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " pod="openstack/ceilometer-0" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.930197 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755015a8-ffb0-477f-bcbe-2db30cae8c43-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " pod="openstack/ceilometer-0" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.930441 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/755015a8-ffb0-477f-bcbe-2db30cae8c43-config-data\") pod \"ceilometer-0\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " pod="openstack/ceilometer-0" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.930579 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755015a8-ffb0-477f-bcbe-2db30cae8c43-run-httpd\") pod \"ceilometer-0\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " pod="openstack/ceilometer-0" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.930756 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/755015a8-ffb0-477f-bcbe-2db30cae8c43-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " pod="openstack/ceilometer-0" Dec 01 
09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.930946 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvwrn\" (UniqueName: \"kubernetes.io/projected/755015a8-ffb0-477f-bcbe-2db30cae8c43-kube-api-access-hvwrn\") pod \"ceilometer-0\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " pod="openstack/ceilometer-0" Dec 01 09:53:30 crc kubenswrapper[4933]: I1201 09:53:30.931127 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/755015a8-ffb0-477f-bcbe-2db30cae8c43-scripts\") pod \"ceilometer-0\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " pod="openstack/ceilometer-0" Dec 01 09:53:31 crc kubenswrapper[4933]: I1201 09:53:31.033561 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/755015a8-ffb0-477f-bcbe-2db30cae8c43-scripts\") pod \"ceilometer-0\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " pod="openstack/ceilometer-0" Dec 01 09:53:31 crc kubenswrapper[4933]: I1201 09:53:31.033651 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755015a8-ffb0-477f-bcbe-2db30cae8c43-log-httpd\") pod \"ceilometer-0\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " pod="openstack/ceilometer-0" Dec 01 09:53:31 crc kubenswrapper[4933]: I1201 09:53:31.033718 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755015a8-ffb0-477f-bcbe-2db30cae8c43-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " pod="openstack/ceilometer-0" Dec 01 09:53:31 crc kubenswrapper[4933]: I1201 09:53:31.033751 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/755015a8-ffb0-477f-bcbe-2db30cae8c43-config-data\") pod \"ceilometer-0\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " pod="openstack/ceilometer-0" Dec 01 09:53:31 crc kubenswrapper[4933]: I1201 09:53:31.033773 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755015a8-ffb0-477f-bcbe-2db30cae8c43-run-httpd\") pod \"ceilometer-0\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " pod="openstack/ceilometer-0" Dec 01 09:53:31 crc kubenswrapper[4933]: I1201 09:53:31.033823 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/755015a8-ffb0-477f-bcbe-2db30cae8c43-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " pod="openstack/ceilometer-0" Dec 01 09:53:31 crc kubenswrapper[4933]: I1201 09:53:31.033885 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvwrn\" (UniqueName: \"kubernetes.io/projected/755015a8-ffb0-477f-bcbe-2db30cae8c43-kube-api-access-hvwrn\") pod \"ceilometer-0\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " pod="openstack/ceilometer-0" Dec 01 09:53:31 crc kubenswrapper[4933]: I1201 09:53:31.035100 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755015a8-ffb0-477f-bcbe-2db30cae8c43-run-httpd\") pod \"ceilometer-0\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " 
pod="openstack/ceilometer-0" Dec 01 09:53:31 crc kubenswrapper[4933]: I1201 09:53:31.035297 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755015a8-ffb0-477f-bcbe-2db30cae8c43-log-httpd\") pod \"ceilometer-0\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " pod="openstack/ceilometer-0" Dec 01 09:53:31 crc kubenswrapper[4933]: I1201 09:53:31.040123 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755015a8-ffb0-477f-bcbe-2db30cae8c43-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " pod="openstack/ceilometer-0" Dec 01 09:53:31 crc kubenswrapper[4933]: I1201 09:53:31.041609 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/755015a8-ffb0-477f-bcbe-2db30cae8c43-scripts\") pod \"ceilometer-0\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " pod="openstack/ceilometer-0" Dec 01 09:53:31 crc kubenswrapper[4933]: I1201 09:53:31.043816 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/755015a8-ffb0-477f-bcbe-2db30cae8c43-config-data\") pod \"ceilometer-0\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " pod="openstack/ceilometer-0" Dec 01 09:53:31 crc kubenswrapper[4933]: I1201 09:53:31.044073 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/755015a8-ffb0-477f-bcbe-2db30cae8c43-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " pod="openstack/ceilometer-0" Dec 01 09:53:31 crc kubenswrapper[4933]: I1201 09:53:31.058111 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvwrn\" (UniqueName: \"kubernetes.io/projected/755015a8-ffb0-477f-bcbe-2db30cae8c43-kube-api-access-hvwrn\") pod \"ceilometer-0\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " pod="openstack/ceilometer-0" Dec 01 09:53:31 crc kubenswrapper[4933]: I1201 09:53:31.124126 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:53:31 crc kubenswrapper[4933]: I1201 09:53:31.624236 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:53:31 crc kubenswrapper[4933]: W1201 09:53:31.636015 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod755015a8_ffb0_477f_bcbe_2db30cae8c43.slice/crio-213a80db770065a8a6496dcc935be1f50e20bb05a1b78357e2f3ddc47ece7e34 WatchSource:0}: Error finding container 213a80db770065a8a6496dcc935be1f50e20bb05a1b78357e2f3ddc47ece7e34: Status 404 returned error can't find the container with id 213a80db770065a8a6496dcc935be1f50e20bb05a1b78357e2f3ddc47ece7e34 Dec 01 09:53:31 crc kubenswrapper[4933]: I1201 09:53:31.683506 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cab051ee-5581-45d7-8d3a-c721db35f4e7" path="/var/lib/kubelet/pods/cab051ee-5581-45d7-8d3a-c721db35f4e7/volumes" Dec 01 09:53:32 crc kubenswrapper[4933]: I1201 09:53:32.455149 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755015a8-ffb0-477f-bcbe-2db30cae8c43","Type":"ContainerStarted","Data":"213a80db770065a8a6496dcc935be1f50e20bb05a1b78357e2f3ddc47ece7e34"} Dec 01 09:53:34 crc kubenswrapper[4933]: I1201 09:53:34.974371 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 01 09:53:34 crc kubenswrapper[4933]: I1201 09:53:34.974808 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 01 09:53:35 crc kubenswrapper[4933]: I1201 09:53:35.015840 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 01 09:53:35 crc kubenswrapper[4933]: I1201 09:53:35.027468 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 01 09:53:35 crc kubenswrapper[4933]: I1201 09:53:35.507224 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 01 09:53:35 crc kubenswrapper[4933]: I1201 09:53:35.507280 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 01 09:53:35 crc kubenswrapper[4933]: I1201 09:53:35.749905 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 01 09:53:35 crc kubenswrapper[4933]: I1201 09:53:35.750538 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 01 09:53:35 crc kubenswrapper[4933]: I1201 09:53:35.792172 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 01 09:53:35 crc kubenswrapper[4933]: I1201 09:53:35.806488 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 01 09:53:36 crc kubenswrapper[4933]: I1201 09:53:36.521293 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 01 09:53:36 crc kubenswrapper[4933]: I1201 09:53:36.521975 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 01 09:53:38 crc kubenswrapper[4933]: I1201 09:53:38.049141 4933 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 01 09:53:38 crc kubenswrapper[4933]: I1201 09:53:38.050195 4933 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 09:53:38 crc kubenswrapper[4933]: I1201 09:53:38.282780 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 01 09:53:38 crc kubenswrapper[4933]: I1201 09:53:38.550545 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755015a8-ffb0-477f-bcbe-2db30cae8c43","Type":"ContainerStarted","Data":"cb867ddfc27dc847003421c7ddcfee0c8faf414fd6752eb7f7e8bcf07fed22f5"} Dec 01 09:53:38 crc kubenswrapper[4933]: I1201 09:53:38.553183 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xzlnq" event={"ID":"9427df5f-7233-4b84-b1c7-1567a9b686df","Type":"ContainerStarted","Data":"2008d28709e4d35910b136e95a746ebcf6b19cd0952ac1cf45cfe90e36235d13"} Dec 01 09:53:38 crc kubenswrapper[4933]: I1201 09:53:38.589136 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-xzlnq" podStartSLOduration=1.922812681 podStartE2EDuration="10.589100768s" podCreationTimestamp="2025-12-01 09:53:28 +0000 UTC" firstStartedPulling="2025-12-01 09:53:28.936279111 +0000 UTC m=+1299.578002726" lastFinishedPulling="2025-12-01 09:53:37.602567198 +0000 UTC m=+1308.244290813" observedRunningTime="2025-12-01 09:53:38.57650042 +0000 UTC m=+1309.218224055" watchObservedRunningTime="2025-12-01 09:53:38.589100768 +0000 UTC m=+1309.230824383" Dec 01 09:53:38 crc kubenswrapper[4933]: I1201 09:53:38.673188 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:53:38 crc kubenswrapper[4933]: I1201 09:53:38.862542 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 01 09:53:38 crc kubenswrapper[4933]: I1201 09:53:38.862698 4933 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 09:53:39 crc kubenswrapper[4933]: I1201 09:53:39.060651 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6775f97bdb-vs7m8" podUID="6ffcfb41-8086-4e28-b88a-da47dd38a844" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Dec 01 09:53:39 crc kubenswrapper[4933]: I1201 09:53:39.246089 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 01 09:53:39 crc kubenswrapper[4933]: I1201 09:53:39.586977 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755015a8-ffb0-477f-bcbe-2db30cae8c43","Type":"ContainerStarted","Data":"9902527ed2ee7342da290592f5bea80ee11f6b98113621f45419d7f2427c53f3"} Dec 01 09:53:39 crc kubenswrapper[4933]: I1201 09:53:39.587081 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755015a8-ffb0-477f-bcbe-2db30cae8c43","Type":"ContainerStarted","Data":"ce07a034a6eae6beb5627d23384e5721cb181f15491c9074bbb99050bed488d1"} Dec 01 09:53:41 crc kubenswrapper[4933]: I1201 09:53:41.629294 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"755015a8-ffb0-477f-bcbe-2db30cae8c43","Type":"ContainerStarted","Data":"7f84b076098b226266da4bf9d6d8dcd18e3329b2c80be5b923a81ff3b57142b3"} Dec 01 09:53:41 crc kubenswrapper[4933]: I1201 09:53:41.629678 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="755015a8-ffb0-477f-bcbe-2db30cae8c43" containerName="ceilometer-central-agent" containerID="cri-o://cb867ddfc27dc847003421c7ddcfee0c8faf414fd6752eb7f7e8bcf07fed22f5" gracePeriod=30 Dec 01 09:53:41 crc kubenswrapper[4933]: I1201 09:53:41.629799 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="755015a8-ffb0-477f-bcbe-2db30cae8c43" containerName="sg-core" containerID="cri-o://9902527ed2ee7342da290592f5bea80ee11f6b98113621f45419d7f2427c53f3" gracePeriod=30 Dec 01 09:53:41 crc kubenswrapper[4933]: I1201 09:53:41.630484 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 09:53:41 crc kubenswrapper[4933]: I1201 09:53:41.629902 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="755015a8-ffb0-477f-bcbe-2db30cae8c43" containerName="ceilometer-notification-agent" containerID="cri-o://ce07a034a6eae6beb5627d23384e5721cb181f15491c9074bbb99050bed488d1" gracePeriod=30 Dec 01 09:53:41 crc kubenswrapper[4933]: I1201 09:53:41.629851 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="755015a8-ffb0-477f-bcbe-2db30cae8c43" containerName="proxy-httpd" containerID="cri-o://7f84b076098b226266da4bf9d6d8dcd18e3329b2c80be5b923a81ff3b57142b3" gracePeriod=30 Dec 01 09:53:41 crc kubenswrapper[4933]: I1201 09:53:41.669574 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.624969767 podStartE2EDuration="11.669542764s" podCreationTimestamp="2025-12-01 09:53:30 +0000 UTC" firstStartedPulling="2025-12-01 09:53:31.639649636 +0000 UTC m=+1302.281373251" lastFinishedPulling="2025-12-01 09:53:40.684222633 +0000 UTC m=+1311.325946248" observedRunningTime="2025-12-01 09:53:41.657661873 +0000 UTC m=+1312.299385488" watchObservedRunningTime="2025-12-01 09:53:41.669542764 +0000 UTC m=+1312.311266379" Dec 01 09:53:42 crc kubenswrapper[4933]: I1201 09:53:42.644338 4933 generic.go:334] "Generic (PLEG): container finished" podID="755015a8-ffb0-477f-bcbe-2db30cae8c43" containerID="7f84b076098b226266da4bf9d6d8dcd18e3329b2c80be5b923a81ff3b57142b3" exitCode=0 Dec 01 09:53:42 crc kubenswrapper[4933]: I1201 09:53:42.644919 4933 generic.go:334] "Generic (PLEG): container finished" podID="755015a8-ffb0-477f-bcbe-2db30cae8c43" containerID="9902527ed2ee7342da290592f5bea80ee11f6b98113621f45419d7f2427c53f3" exitCode=2 Dec 01 09:53:42 crc kubenswrapper[4933]: I1201 09:53:42.644347 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755015a8-ffb0-477f-bcbe-2db30cae8c43","Type":"ContainerDied","Data":"7f84b076098b226266da4bf9d6d8dcd18e3329b2c80be5b923a81ff3b57142b3"} Dec 01 09:53:42 crc kubenswrapper[4933]: I1201 09:53:42.644996 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755015a8-ffb0-477f-bcbe-2db30cae8c43","Type":"ContainerDied","Data":"9902527ed2ee7342da290592f5bea80ee11f6b98113621f45419d7f2427c53f3"} Dec 01 09:53:42 crc kubenswrapper[4933]: I1201 09:53:42.645032 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"755015a8-ffb0-477f-bcbe-2db30cae8c43","Type":"ContainerDied","Data":"ce07a034a6eae6beb5627d23384e5721cb181f15491c9074bbb99050bed488d1"} Dec 01 09:53:42 crc kubenswrapper[4933]: I1201 09:53:42.644936 4933 generic.go:334] "Generic (PLEG): container finished" podID="755015a8-ffb0-477f-bcbe-2db30cae8c43" containerID="ce07a034a6eae6beb5627d23384e5721cb181f15491c9074bbb99050bed488d1" exitCode=0 Dec 01 09:53:44 crc kubenswrapper[4933]: E1201 09:53:44.387437 4933 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ffcfb41_8086_4e28_b88a_da47dd38a844.slice/crio-conmon-e186129627d1bc3cfc463ba580d1136dd71ee66cf9a2f5a3016cd2790cb63bb0.scope\": RecentStats: unable to find data in memory cache]" Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.510497 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.673626 4933 generic.go:334] "Generic (PLEG): container finished" podID="6ffcfb41-8086-4e28-b88a-da47dd38a844" containerID="e186129627d1bc3cfc463ba580d1136dd71ee66cf9a2f5a3016cd2790cb63bb0" exitCode=137 Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.673705 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6775f97bdb-vs7m8" event={"ID":"6ffcfb41-8086-4e28-b88a-da47dd38a844","Type":"ContainerDied","Data":"e186129627d1bc3cfc463ba580d1136dd71ee66cf9a2f5a3016cd2790cb63bb0"} Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.673745 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6775f97bdb-vs7m8" event={"ID":"6ffcfb41-8086-4e28-b88a-da47dd38a844","Type":"ContainerDied","Data":"e4f29358548baa73d85fb9c708be57e634954ae831577c5b450d6fa02614debd"} Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.673769 4933 scope.go:117] "RemoveContainer" containerID="4939aef99ded42615cef4e9d95fccaaae2f1009f74245945b7589fb343808301" Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.674000 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6775f97bdb-vs7m8" Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.685335 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ffcfb41-8086-4e28-b88a-da47dd38a844-logs\") pod \"6ffcfb41-8086-4e28-b88a-da47dd38a844\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.685472 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ffcfb41-8086-4e28-b88a-da47dd38a844-horizon-secret-key\") pod \"6ffcfb41-8086-4e28-b88a-da47dd38a844\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.685527 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkfjf\" (UniqueName: \"kubernetes.io/projected/6ffcfb41-8086-4e28-b88a-da47dd38a844-kube-api-access-hkfjf\") pod \"6ffcfb41-8086-4e28-b88a-da47dd38a844\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.685610 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ffcfb41-8086-4e28-b88a-da47dd38a844-scripts\") pod \"6ffcfb41-8086-4e28-b88a-da47dd38a844\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.685715 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ffcfb41-8086-4e28-b88a-da47dd38a844-combined-ca-bundle\") pod \"6ffcfb41-8086-4e28-b88a-da47dd38a844\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.685744 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ffcfb41-8086-4e28-b88a-da47dd38a844-config-data\") pod \"6ffcfb41-8086-4e28-b88a-da47dd38a844\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.685889 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ffcfb41-8086-4e28-b88a-da47dd38a844-horizon-tls-certs\") pod \"6ffcfb41-8086-4e28-b88a-da47dd38a844\" (UID: \"6ffcfb41-8086-4e28-b88a-da47dd38a844\") " Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.686899 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ffcfb41-8086-4e28-b88a-da47dd38a844-logs" (OuterVolumeSpecName: "logs") pod "6ffcfb41-8086-4e28-b88a-da47dd38a844" (UID: "6ffcfb41-8086-4e28-b88a-da47dd38a844"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.704418 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ffcfb41-8086-4e28-b88a-da47dd38a844-kube-api-access-hkfjf" (OuterVolumeSpecName: "kube-api-access-hkfjf") pod "6ffcfb41-8086-4e28-b88a-da47dd38a844" (UID: "6ffcfb41-8086-4e28-b88a-da47dd38a844"). InnerVolumeSpecName "kube-api-access-hkfjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.707360 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ffcfb41-8086-4e28-b88a-da47dd38a844-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6ffcfb41-8086-4e28-b88a-da47dd38a844" (UID: "6ffcfb41-8086-4e28-b88a-da47dd38a844"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.730652 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ffcfb41-8086-4e28-b88a-da47dd38a844-config-data" (OuterVolumeSpecName: "config-data") pod "6ffcfb41-8086-4e28-b88a-da47dd38a844" (UID: "6ffcfb41-8086-4e28-b88a-da47dd38a844"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.731213 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ffcfb41-8086-4e28-b88a-da47dd38a844-scripts" (OuterVolumeSpecName: "scripts") pod "6ffcfb41-8086-4e28-b88a-da47dd38a844" (UID: "6ffcfb41-8086-4e28-b88a-da47dd38a844"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.738476 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ffcfb41-8086-4e28-b88a-da47dd38a844-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ffcfb41-8086-4e28-b88a-da47dd38a844" (UID: "6ffcfb41-8086-4e28-b88a-da47dd38a844"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.773578 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ffcfb41-8086-4e28-b88a-da47dd38a844-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "6ffcfb41-8086-4e28-b88a-da47dd38a844" (UID: "6ffcfb41-8086-4e28-b88a-da47dd38a844"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.788612 4933 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ffcfb41-8086-4e28-b88a-da47dd38a844-logs\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.788652 4933 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ffcfb41-8086-4e28-b88a-da47dd38a844-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.788665 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkfjf\" (UniqueName: \"kubernetes.io/projected/6ffcfb41-8086-4e28-b88a-da47dd38a844-kube-api-access-hkfjf\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.788674 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ffcfb41-8086-4e28-b88a-da47dd38a844-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.788684 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ffcfb41-8086-4e28-b88a-da47dd38a844-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.788693 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ffcfb41-8086-4e28-b88a-da47dd38a844-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.788703 4933 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ffcfb41-8086-4e28-b88a-da47dd38a844-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.907137 4933 scope.go:117] "RemoveContainer" containerID="e186129627d1bc3cfc463ba580d1136dd71ee66cf9a2f5a3016cd2790cb63bb0" Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.932572 4933 scope.go:117] "RemoveContainer" containerID="4939aef99ded42615cef4e9d95fccaaae2f1009f74245945b7589fb343808301" Dec 01 09:53:44 crc kubenswrapper[4933]: E1201 09:53:44.933203 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4939aef99ded42615cef4e9d95fccaaae2f1009f74245945b7589fb343808301\": container with ID starting with 4939aef99ded42615cef4e9d95fccaaae2f1009f74245945b7589fb343808301 not found: ID does not exist" containerID="4939aef99ded42615cef4e9d95fccaaae2f1009f74245945b7589fb343808301" Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.933253 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4939aef99ded42615cef4e9d95fccaaae2f1009f74245945b7589fb343808301"} err="failed to get container status \"4939aef99ded42615cef4e9d95fccaaae2f1009f74245945b7589fb343808301\": rpc error: code = NotFound desc = could not find container \"4939aef99ded42615cef4e9d95fccaaae2f1009f74245945b7589fb343808301\": container with ID starting with 4939aef99ded42615cef4e9d95fccaaae2f1009f74245945b7589fb343808301 not found: ID does not exist" Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.933284 4933 scope.go:117] "RemoveContainer" containerID="e186129627d1bc3cfc463ba580d1136dd71ee66cf9a2f5a3016cd2790cb63bb0" Dec 01 09:53:44 crc kubenswrapper[4933]: 
E1201 09:53:44.933946 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e186129627d1bc3cfc463ba580d1136dd71ee66cf9a2f5a3016cd2790cb63bb0\": container with ID starting with e186129627d1bc3cfc463ba580d1136dd71ee66cf9a2f5a3016cd2790cb63bb0 not found: ID does not exist" containerID="e186129627d1bc3cfc463ba580d1136dd71ee66cf9a2f5a3016cd2790cb63bb0" Dec 01 09:53:44 crc kubenswrapper[4933]: I1201 09:53:44.934009 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e186129627d1bc3cfc463ba580d1136dd71ee66cf9a2f5a3016cd2790cb63bb0"} err="failed to get container status \"e186129627d1bc3cfc463ba580d1136dd71ee66cf9a2f5a3016cd2790cb63bb0\": rpc error: code = NotFound desc = could not find container \"e186129627d1bc3cfc463ba580d1136dd71ee66cf9a2f5a3016cd2790cb63bb0\": container with ID starting with e186129627d1bc3cfc463ba580d1136dd71ee66cf9a2f5a3016cd2790cb63bb0 not found: ID does not exist" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.013127 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6775f97bdb-vs7m8"] Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.027545 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6775f97bdb-vs7m8"] Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.515679 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.605241 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/755015a8-ffb0-477f-bcbe-2db30cae8c43-sg-core-conf-yaml\") pod \"755015a8-ffb0-477f-bcbe-2db30cae8c43\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.605489 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvwrn\" (UniqueName: \"kubernetes.io/projected/755015a8-ffb0-477f-bcbe-2db30cae8c43-kube-api-access-hvwrn\") pod \"755015a8-ffb0-477f-bcbe-2db30cae8c43\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.605535 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755015a8-ffb0-477f-bcbe-2db30cae8c43-combined-ca-bundle\") pod \"755015a8-ffb0-477f-bcbe-2db30cae8c43\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.606250 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/755015a8-ffb0-477f-bcbe-2db30cae8c43-config-data\") pod \"755015a8-ffb0-477f-bcbe-2db30cae8c43\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.606320 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/755015a8-ffb0-477f-bcbe-2db30cae8c43-scripts\") pod \"755015a8-ffb0-477f-bcbe-2db30cae8c43\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.606374 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755015a8-ffb0-477f-bcbe-2db30cae8c43-log-httpd\") pod 
\"755015a8-ffb0-477f-bcbe-2db30cae8c43\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.606490 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755015a8-ffb0-477f-bcbe-2db30cae8c43-run-httpd\") pod \"755015a8-ffb0-477f-bcbe-2db30cae8c43\" (UID: \"755015a8-ffb0-477f-bcbe-2db30cae8c43\") " Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.606938 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/755015a8-ffb0-477f-bcbe-2db30cae8c43-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "755015a8-ffb0-477f-bcbe-2db30cae8c43" (UID: "755015a8-ffb0-477f-bcbe-2db30cae8c43"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.607094 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/755015a8-ffb0-477f-bcbe-2db30cae8c43-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "755015a8-ffb0-477f-bcbe-2db30cae8c43" (UID: "755015a8-ffb0-477f-bcbe-2db30cae8c43"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.607681 4933 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755015a8-ffb0-477f-bcbe-2db30cae8c43-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.607712 4933 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755015a8-ffb0-477f-bcbe-2db30cae8c43-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.613354 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/755015a8-ffb0-477f-bcbe-2db30cae8c43-kube-api-access-hvwrn" (OuterVolumeSpecName: "kube-api-access-hvwrn") pod "755015a8-ffb0-477f-bcbe-2db30cae8c43" (UID: "755015a8-ffb0-477f-bcbe-2db30cae8c43"). InnerVolumeSpecName "kube-api-access-hvwrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.617273 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/755015a8-ffb0-477f-bcbe-2db30cae8c43-scripts" (OuterVolumeSpecName: "scripts") pod "755015a8-ffb0-477f-bcbe-2db30cae8c43" (UID: "755015a8-ffb0-477f-bcbe-2db30cae8c43"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.635378 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/755015a8-ffb0-477f-bcbe-2db30cae8c43-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "755015a8-ffb0-477f-bcbe-2db30cae8c43" (UID: "755015a8-ffb0-477f-bcbe-2db30cae8c43"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.681962 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ffcfb41-8086-4e28-b88a-da47dd38a844" path="/var/lib/kubelet/pods/6ffcfb41-8086-4e28-b88a-da47dd38a844/volumes" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.686094 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/755015a8-ffb0-477f-bcbe-2db30cae8c43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "755015a8-ffb0-477f-bcbe-2db30cae8c43" (UID: "755015a8-ffb0-477f-bcbe-2db30cae8c43"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.692128 4933 generic.go:334] "Generic (PLEG): container finished" podID="755015a8-ffb0-477f-bcbe-2db30cae8c43" containerID="cb867ddfc27dc847003421c7ddcfee0c8faf414fd6752eb7f7e8bcf07fed22f5" exitCode=0 Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.692240 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.692223 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755015a8-ffb0-477f-bcbe-2db30cae8c43","Type":"ContainerDied","Data":"cb867ddfc27dc847003421c7ddcfee0c8faf414fd6752eb7f7e8bcf07fed22f5"} Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.692502 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755015a8-ffb0-477f-bcbe-2db30cae8c43","Type":"ContainerDied","Data":"213a80db770065a8a6496dcc935be1f50e20bb05a1b78357e2f3ddc47ece7e34"} Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.692536 4933 scope.go:117] "RemoveContainer" containerID="7f84b076098b226266da4bf9d6d8dcd18e3329b2c80be5b923a81ff3b57142b3" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.709410 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvwrn\" (UniqueName: \"kubernetes.io/projected/755015a8-ffb0-477f-bcbe-2db30cae8c43-kube-api-access-hvwrn\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.709454 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755015a8-ffb0-477f-bcbe-2db30cae8c43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.709463 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/755015a8-ffb0-477f-bcbe-2db30cae8c43-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.709473 4933 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/755015a8-ffb0-477f-bcbe-2db30cae8c43-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.719700 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/755015a8-ffb0-477f-bcbe-2db30cae8c43-config-data" (OuterVolumeSpecName: "config-data") pod "755015a8-ffb0-477f-bcbe-2db30cae8c43" (UID: "755015a8-ffb0-477f-bcbe-2db30cae8c43"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.721536 4933 scope.go:117] "RemoveContainer" containerID="9902527ed2ee7342da290592f5bea80ee11f6b98113621f45419d7f2427c53f3" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.758109 4933 scope.go:117] "RemoveContainer" containerID="ce07a034a6eae6beb5627d23384e5721cb181f15491c9074bbb99050bed488d1" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.783298 4933 scope.go:117] "RemoveContainer" containerID="cb867ddfc27dc847003421c7ddcfee0c8faf414fd6752eb7f7e8bcf07fed22f5" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.806716 4933 scope.go:117] "RemoveContainer" containerID="7f84b076098b226266da4bf9d6d8dcd18e3329b2c80be5b923a81ff3b57142b3" Dec 01 09:53:45 crc kubenswrapper[4933]: E1201 09:53:45.807614 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f84b076098b226266da4bf9d6d8dcd18e3329b2c80be5b923a81ff3b57142b3\": container with ID starting with 7f84b076098b226266da4bf9d6d8dcd18e3329b2c80be5b923a81ff3b57142b3 not found: ID does not exist" containerID="7f84b076098b226266da4bf9d6d8dcd18e3329b2c80be5b923a81ff3b57142b3" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.807686 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f84b076098b226266da4bf9d6d8dcd18e3329b2c80be5b923a81ff3b57142b3"} err="failed to get container status \"7f84b076098b226266da4bf9d6d8dcd18e3329b2c80be5b923a81ff3b57142b3\": rpc error: code = NotFound desc = could not find container \"7f84b076098b226266da4bf9d6d8dcd18e3329b2c80be5b923a81ff3b57142b3\": container with ID starting with 7f84b076098b226266da4bf9d6d8dcd18e3329b2c80be5b923a81ff3b57142b3 not found: ID does not exist" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.807733 4933 scope.go:117] "RemoveContainer" containerID="9902527ed2ee7342da290592f5bea80ee11f6b98113621f45419d7f2427c53f3" Dec 01 09:53:45 crc kubenswrapper[4933]: E1201 09:53:45.808200 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9902527ed2ee7342da290592f5bea80ee11f6b98113621f45419d7f2427c53f3\": container with ID starting with 9902527ed2ee7342da290592f5bea80ee11f6b98113621f45419d7f2427c53f3 not found: ID does not exist" containerID="9902527ed2ee7342da290592f5bea80ee11f6b98113621f45419d7f2427c53f3" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.808226 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9902527ed2ee7342da290592f5bea80ee11f6b98113621f45419d7f2427c53f3"} err="failed to get container status \"9902527ed2ee7342da290592f5bea80ee11f6b98113621f45419d7f2427c53f3\": rpc error: code = NotFound desc = could not find container \"9902527ed2ee7342da290592f5bea80ee11f6b98113621f45419d7f2427c53f3\": container with ID starting with 9902527ed2ee7342da290592f5bea80ee11f6b98113621f45419d7f2427c53f3 not found: ID does not exist" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.808246 4933 scope.go:117] "RemoveContainer" containerID="ce07a034a6eae6beb5627d23384e5721cb181f15491c9074bbb99050bed488d1" Dec 01 09:53:45 crc kubenswrapper[4933]: E1201 09:53:45.808701 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce07a034a6eae6beb5627d23384e5721cb181f15491c9074bbb99050bed488d1\": container with ID starting with 
ce07a034a6eae6beb5627d23384e5721cb181f15491c9074bbb99050bed488d1 not found: ID does not exist" containerID="ce07a034a6eae6beb5627d23384e5721cb181f15491c9074bbb99050bed488d1" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.808736 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce07a034a6eae6beb5627d23384e5721cb181f15491c9074bbb99050bed488d1"} err="failed to get container status \"ce07a034a6eae6beb5627d23384e5721cb181f15491c9074bbb99050bed488d1\": rpc error: code = NotFound desc = could not find container \"ce07a034a6eae6beb5627d23384e5721cb181f15491c9074bbb99050bed488d1\": container with ID starting with ce07a034a6eae6beb5627d23384e5721cb181f15491c9074bbb99050bed488d1 not found: ID does not exist" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.808756 4933 scope.go:117] "RemoveContainer" containerID="cb867ddfc27dc847003421c7ddcfee0c8faf414fd6752eb7f7e8bcf07fed22f5" Dec 01 09:53:45 crc kubenswrapper[4933]: E1201 09:53:45.809194 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb867ddfc27dc847003421c7ddcfee0c8faf414fd6752eb7f7e8bcf07fed22f5\": container with ID starting with cb867ddfc27dc847003421c7ddcfee0c8faf414fd6752eb7f7e8bcf07fed22f5 not found: ID does not exist" containerID="cb867ddfc27dc847003421c7ddcfee0c8faf414fd6752eb7f7e8bcf07fed22f5" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.809233 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb867ddfc27dc847003421c7ddcfee0c8faf414fd6752eb7f7e8bcf07fed22f5"} err="failed to get container status \"cb867ddfc27dc847003421c7ddcfee0c8faf414fd6752eb7f7e8bcf07fed22f5\": rpc error: code = NotFound desc = could not find container \"cb867ddfc27dc847003421c7ddcfee0c8faf414fd6752eb7f7e8bcf07fed22f5\": container with ID starting with cb867ddfc27dc847003421c7ddcfee0c8faf414fd6752eb7f7e8bcf07fed22f5 not found: ID does not exist" Dec 01 09:53:45 crc kubenswrapper[4933]: I1201 09:53:45.811892 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/755015a8-ffb0-477f-bcbe-2db30cae8c43-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.035728 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.047356 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.065126 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:53:46 crc kubenswrapper[4933]: E1201 09:53:46.066017 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ffcfb41-8086-4e28-b88a-da47dd38a844" containerName="horizon" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.066100 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ffcfb41-8086-4e28-b88a-da47dd38a844" containerName="horizon" Dec 01 09:53:46 crc kubenswrapper[4933]: E1201 09:53:46.066196 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="755015a8-ffb0-477f-bcbe-2db30cae8c43" containerName="proxy-httpd" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.066267 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="755015a8-ffb0-477f-bcbe-2db30cae8c43" containerName="proxy-httpd" Dec 01 09:53:46 crc kubenswrapper[4933]: E1201 09:53:46.066374 4933 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="755015a8-ffb0-477f-bcbe-2db30cae8c43" containerName="ceilometer-central-agent" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.066436 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="755015a8-ffb0-477f-bcbe-2db30cae8c43" containerName="ceilometer-central-agent" Dec 01 09:53:46 crc kubenswrapper[4933]: E1201 09:53:46.066503 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="755015a8-ffb0-477f-bcbe-2db30cae8c43" containerName="sg-core" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.066556 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="755015a8-ffb0-477f-bcbe-2db30cae8c43" containerName="sg-core" Dec 01 09:53:46 crc kubenswrapper[4933]: E1201 09:53:46.066622 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="755015a8-ffb0-477f-bcbe-2db30cae8c43" containerName="ceilometer-notification-agent" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.066677 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="755015a8-ffb0-477f-bcbe-2db30cae8c43" containerName="ceilometer-notification-agent" Dec 01 09:53:46 crc kubenswrapper[4933]: E1201 09:53:46.066749 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ffcfb41-8086-4e28-b88a-da47dd38a844" containerName="horizon-log" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.066844 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ffcfb41-8086-4e28-b88a-da47dd38a844" containerName="horizon-log" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.067186 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ffcfb41-8086-4e28-b88a-da47dd38a844" containerName="horizon" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.067271 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="755015a8-ffb0-477f-bcbe-2db30cae8c43" containerName="sg-core" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.067366 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="755015a8-ffb0-477f-bcbe-2db30cae8c43" containerName="ceilometer-notification-agent" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.067464 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ffcfb41-8086-4e28-b88a-da47dd38a844" containerName="horizon-log" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.067537 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="755015a8-ffb0-477f-bcbe-2db30cae8c43" containerName="proxy-httpd" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.067605 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="755015a8-ffb0-477f-bcbe-2db30cae8c43" containerName="ceilometer-central-agent" Dec 01 09:53:46 crc kubenswrapper[4933]: E1201 09:53:46.068008 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ffcfb41-8086-4e28-b88a-da47dd38a844" containerName="horizon" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.068075 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ffcfb41-8086-4e28-b88a-da47dd38a844" containerName="horizon" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.068356 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ffcfb41-8086-4e28-b88a-da47dd38a844" containerName="horizon" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.070612 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.075065 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.075663 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.083383 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.219893 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " pod="openstack/ceilometer-0" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.219965 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-log-httpd\") pod \"ceilometer-0\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " pod="openstack/ceilometer-0" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.219994 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-run-httpd\") pod \"ceilometer-0\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " pod="openstack/ceilometer-0" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.220114 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-config-data\") pod \"ceilometer-0\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " pod="openstack/ceilometer-0" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.220201 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " pod="openstack/ceilometer-0" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.220243 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-scripts\") pod \"ceilometer-0\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " pod="openstack/ceilometer-0" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.220421 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf5gg\" (UniqueName: \"kubernetes.io/projected/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-kube-api-access-xf5gg\") pod \"ceilometer-0\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " pod="openstack/ceilometer-0" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.323197 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-run-httpd\") pod \"ceilometer-0\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " pod="openstack/ceilometer-0" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.323870 4933 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-config-data\") pod \"ceilometer-0\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " pod="openstack/ceilometer-0" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.324074 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " pod="openstack/ceilometer-0" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.324141 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-scripts\") pod \"ceilometer-0\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " pod="openstack/ceilometer-0" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.324194 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-run-httpd\") pod \"ceilometer-0\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " pod="openstack/ceilometer-0" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.324396 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf5gg\" (UniqueName: \"kubernetes.io/projected/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-kube-api-access-xf5gg\") pod \"ceilometer-0\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " pod="openstack/ceilometer-0" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.324817 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " pod="openstack/ceilometer-0" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.324859 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-log-httpd\") pod \"ceilometer-0\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " pod="openstack/ceilometer-0" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.325642 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-log-httpd\") pod \"ceilometer-0\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " pod="openstack/ceilometer-0" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.329816 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-config-data\") pod \"ceilometer-0\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " pod="openstack/ceilometer-0" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.330131 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " pod="openstack/ceilometer-0" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.331534 4933 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-scripts\") pod \"ceilometer-0\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " pod="openstack/ceilometer-0" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.332205 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " pod="openstack/ceilometer-0" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.350784 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf5gg\" (UniqueName: \"kubernetes.io/projected/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-kube-api-access-xf5gg\") pod \"ceilometer-0\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " pod="openstack/ceilometer-0" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.394569 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:53:46 crc kubenswrapper[4933]: I1201 09:53:46.857810 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:53:46 crc kubenswrapper[4933]: W1201 09:53:46.862775 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda09e8e0e_3ffb_4a17_8291_eafcf23617bf.slice/crio-50459de779d73acea0ff07833c62bdae6cfd0bd051bdb1411c5e08dfdf28ac79 WatchSource:0}: Error finding container 50459de779d73acea0ff07833c62bdae6cfd0bd051bdb1411c5e08dfdf28ac79: Status 404 returned error can't find the container with id 50459de779d73acea0ff07833c62bdae6cfd0bd051bdb1411c5e08dfdf28ac79 Dec 01 09:53:47 crc kubenswrapper[4933]: I1201 09:53:47.681829 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="755015a8-ffb0-477f-bcbe-2db30cae8c43" path="/var/lib/kubelet/pods/755015a8-ffb0-477f-bcbe-2db30cae8c43/volumes" Dec 01 09:53:47 crc kubenswrapper[4933]: I1201 09:53:47.728636 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a09e8e0e-3ffb-4a17-8291-eafcf23617bf","Type":"ContainerStarted","Data":"50459de779d73acea0ff07833c62bdae6cfd0bd051bdb1411c5e08dfdf28ac79"} Dec 01 09:53:48 crc kubenswrapper[4933]: I1201 09:53:48.743775 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a09e8e0e-3ffb-4a17-8291-eafcf23617bf","Type":"ContainerStarted","Data":"1b7d3f91fd3ad09220d3635f245cc99964d038ea4a5c4b0e5a2f773cb2681ee1"} Dec 01 09:53:48 crc kubenswrapper[4933]: I1201 09:53:48.744381 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a09e8e0e-3ffb-4a17-8291-eafcf23617bf","Type":"ContainerStarted","Data":"3380c79a60d7c89c4524245ba20f321ac4b82a370ad9261786bf967b0913a7e5"} Dec 01 09:53:49 crc kubenswrapper[4933]: I1201 09:53:49.761424 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a09e8e0e-3ffb-4a17-8291-eafcf23617bf","Type":"ContainerStarted","Data":"967a949f1dff0ebf7c9c52a0b5132457859b181b6ad00003b0f0e651a664f7bb"} Dec 01 09:53:50 crc kubenswrapper[4933]: I1201 09:53:50.777177 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a09e8e0e-3ffb-4a17-8291-eafcf23617bf","Type":"ContainerStarted","Data":"da19e55062509f01cbc6d46bf589d3db4636e7ca146a1fb1e139f4516d53c366"} Dec 01 09:53:50 crc 
kubenswrapper[4933]: I1201 09:53:50.777623 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 09:53:50 crc kubenswrapper[4933]: I1201 09:53:50.819408 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.356441265 podStartE2EDuration="4.819382164s" podCreationTimestamp="2025-12-01 09:53:46 +0000 UTC" firstStartedPulling="2025-12-01 09:53:46.865969214 +0000 UTC m=+1317.507692839" lastFinishedPulling="2025-12-01 09:53:50.328910123 +0000 UTC m=+1320.970633738" observedRunningTime="2025-12-01 09:53:50.809063741 +0000 UTC m=+1321.450787356" watchObservedRunningTime="2025-12-01 09:53:50.819382164 +0000 UTC m=+1321.461105789" Dec 01 09:53:51 crc kubenswrapper[4933]: I1201 09:53:51.806379 4933 generic.go:334] "Generic (PLEG): container finished" podID="9427df5f-7233-4b84-b1c7-1567a9b686df" containerID="2008d28709e4d35910b136e95a746ebcf6b19cd0952ac1cf45cfe90e36235d13" exitCode=0 Dec 01 09:53:51 crc kubenswrapper[4933]: I1201 09:53:51.806828 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xzlnq" event={"ID":"9427df5f-7233-4b84-b1c7-1567a9b686df","Type":"ContainerDied","Data":"2008d28709e4d35910b136e95a746ebcf6b19cd0952ac1cf45cfe90e36235d13"} Dec 01 09:53:53 crc kubenswrapper[4933]: I1201 09:53:53.203086 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xzlnq" Dec 01 09:53:53 crc kubenswrapper[4933]: I1201 09:53:53.301363 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9427df5f-7233-4b84-b1c7-1567a9b686df-config-data\") pod \"9427df5f-7233-4b84-b1c7-1567a9b686df\" (UID: \"9427df5f-7233-4b84-b1c7-1567a9b686df\") " Dec 01 09:53:53 crc kubenswrapper[4933]: I1201 09:53:53.301831 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9427df5f-7233-4b84-b1c7-1567a9b686df-combined-ca-bundle\") pod \"9427df5f-7233-4b84-b1c7-1567a9b686df\" (UID: \"9427df5f-7233-4b84-b1c7-1567a9b686df\") " Dec 01 09:53:53 crc kubenswrapper[4933]: I1201 09:53:53.302453 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9427df5f-7233-4b84-b1c7-1567a9b686df-scripts\") pod \"9427df5f-7233-4b84-b1c7-1567a9b686df\" (UID: \"9427df5f-7233-4b84-b1c7-1567a9b686df\") " Dec 01 09:53:53 crc kubenswrapper[4933]: I1201 09:53:53.302525 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrk7f\" (UniqueName: \"kubernetes.io/projected/9427df5f-7233-4b84-b1c7-1567a9b686df-kube-api-access-hrk7f\") pod \"9427df5f-7233-4b84-b1c7-1567a9b686df\" (UID: \"9427df5f-7233-4b84-b1c7-1567a9b686df\") " Dec 01 09:53:53 crc kubenswrapper[4933]: I1201 09:53:53.325838 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9427df5f-7233-4b84-b1c7-1567a9b686df-scripts" (OuterVolumeSpecName: "scripts") pod "9427df5f-7233-4b84-b1c7-1567a9b686df" (UID: "9427df5f-7233-4b84-b1c7-1567a9b686df"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:53 crc kubenswrapper[4933]: I1201 09:53:53.325888 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9427df5f-7233-4b84-b1c7-1567a9b686df-kube-api-access-hrk7f" (OuterVolumeSpecName: "kube-api-access-hrk7f") pod "9427df5f-7233-4b84-b1c7-1567a9b686df" (UID: "9427df5f-7233-4b84-b1c7-1567a9b686df"). InnerVolumeSpecName "kube-api-access-hrk7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:53:53 crc kubenswrapper[4933]: I1201 09:53:53.344889 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9427df5f-7233-4b84-b1c7-1567a9b686df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9427df5f-7233-4b84-b1c7-1567a9b686df" (UID: "9427df5f-7233-4b84-b1c7-1567a9b686df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:53 crc kubenswrapper[4933]: I1201 09:53:53.351318 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9427df5f-7233-4b84-b1c7-1567a9b686df-config-data" (OuterVolumeSpecName: "config-data") pod "9427df5f-7233-4b84-b1c7-1567a9b686df" (UID: "9427df5f-7233-4b84-b1c7-1567a9b686df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:53:53 crc kubenswrapper[4933]: I1201 09:53:53.405722 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9427df5f-7233-4b84-b1c7-1567a9b686df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:53 crc kubenswrapper[4933]: I1201 09:53:53.405774 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9427df5f-7233-4b84-b1c7-1567a9b686df-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:53 crc kubenswrapper[4933]: I1201 09:53:53.405788 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrk7f\" (UniqueName: \"kubernetes.io/projected/9427df5f-7233-4b84-b1c7-1567a9b686df-kube-api-access-hrk7f\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:53 crc kubenswrapper[4933]: I1201 09:53:53.405806 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9427df5f-7233-4b84-b1c7-1567a9b686df-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:53:53 crc kubenswrapper[4933]: I1201 09:53:53.830842 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xzlnq" event={"ID":"9427df5f-7233-4b84-b1c7-1567a9b686df","Type":"ContainerDied","Data":"716ce5f0d0de0c98530a0f2339a99d952430c289d1989dc39ef8df00ddeacb03"} Dec 01 09:53:53 crc kubenswrapper[4933]: I1201 09:53:53.830911 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="716ce5f0d0de0c98530a0f2339a99d952430c289d1989dc39ef8df00ddeacb03" Dec 01 09:53:53 crc kubenswrapper[4933]: I1201 09:53:53.830959 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xzlnq" Dec 01 09:53:53 crc kubenswrapper[4933]: I1201 09:53:53.946319 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 01 09:53:53 crc kubenswrapper[4933]: E1201 09:53:53.947001 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9427df5f-7233-4b84-b1c7-1567a9b686df" containerName="nova-cell0-conductor-db-sync" Dec 01 09:53:53 crc kubenswrapper[4933]: I1201 09:53:53.947032 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="9427df5f-7233-4b84-b1c7-1567a9b686df" containerName="nova-cell0-conductor-db-sync" Dec 01 09:53:53 crc kubenswrapper[4933]: I1201 09:53:53.947282 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="9427df5f-7233-4b84-b1c7-1567a9b686df" containerName="nova-cell0-conductor-db-sync" Dec 01 09:53:53 crc kubenswrapper[4933]: I1201 09:53:53.948160 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 01 09:53:53 crc kubenswrapper[4933]: I1201 09:53:53.950754 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-m25sm" Dec 01 09:53:53 crc kubenswrapper[4933]: I1201 09:53:53.951258 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 01 09:53:53 crc kubenswrapper[4933]: I1201 09:53:53.961815 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 01 09:53:54 crc kubenswrapper[4933]: I1201 09:53:54.124487 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9107505-ec32-479e-b76e-1ffa605a3bfb-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b9107505-ec32-479e-b76e-1ffa605a3bfb\") " pod="openstack/nova-cell0-conductor-0" Dec 01 09:53:54 crc kubenswrapper[4933]: I1201 09:53:54.124609 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9107505-ec32-479e-b76e-1ffa605a3bfb-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b9107505-ec32-479e-b76e-1ffa605a3bfb\") " pod="openstack/nova-cell0-conductor-0" Dec 01 09:53:54 crc kubenswrapper[4933]: I1201 09:53:54.124651 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9xcd\" (UniqueName: \"kubernetes.io/projected/b9107505-ec32-479e-b76e-1ffa605a3bfb-kube-api-access-c9xcd\") pod \"nova-cell0-conductor-0\" (UID: \"b9107505-ec32-479e-b76e-1ffa605a3bfb\") " pod="openstack/nova-cell0-conductor-0" Dec 01 09:53:54 crc kubenswrapper[4933]: I1201 09:53:54.226685 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9107505-ec32-479e-b76e-1ffa605a3bfb-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b9107505-ec32-479e-b76e-1ffa605a3bfb\") " pod="openstack/nova-cell0-conductor-0" Dec 01 09:53:54 crc kubenswrapper[4933]: I1201 09:53:54.227314 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9107505-ec32-479e-b76e-1ffa605a3bfb-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b9107505-ec32-479e-b76e-1ffa605a3bfb\") " pod="openstack/nova-cell0-conductor-0" Dec 01 09:53:54 crc kubenswrapper[4933]: I1201 
09:53:54.227368 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9xcd\" (UniqueName: \"kubernetes.io/projected/b9107505-ec32-479e-b76e-1ffa605a3bfb-kube-api-access-c9xcd\") pod \"nova-cell0-conductor-0\" (UID: \"b9107505-ec32-479e-b76e-1ffa605a3bfb\") " pod="openstack/nova-cell0-conductor-0" Dec 01 09:53:54 crc kubenswrapper[4933]: I1201 09:53:54.242655 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9107505-ec32-479e-b76e-1ffa605a3bfb-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b9107505-ec32-479e-b76e-1ffa605a3bfb\") " pod="openstack/nova-cell0-conductor-0" Dec 01 09:53:54 crc kubenswrapper[4933]: I1201 09:53:54.243687 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9107505-ec32-479e-b76e-1ffa605a3bfb-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b9107505-ec32-479e-b76e-1ffa605a3bfb\") " pod="openstack/nova-cell0-conductor-0" Dec 01 09:53:54 crc kubenswrapper[4933]: I1201 09:53:54.246697 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9xcd\" (UniqueName: \"kubernetes.io/projected/b9107505-ec32-479e-b76e-1ffa605a3bfb-kube-api-access-c9xcd\") pod \"nova-cell0-conductor-0\" (UID: \"b9107505-ec32-479e-b76e-1ffa605a3bfb\") " pod="openstack/nova-cell0-conductor-0" Dec 01 09:53:54 crc kubenswrapper[4933]: I1201 09:53:54.275320 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 01 09:53:54 crc kubenswrapper[4933]: I1201 09:53:54.772537 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 01 09:53:54 crc kubenswrapper[4933]: I1201 09:53:54.842728 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b9107505-ec32-479e-b76e-1ffa605a3bfb","Type":"ContainerStarted","Data":"0bf5b7e61f9e7ccf65c78fc8494989d22635e7ad73e72168d1909c80f4ee1ddf"} Dec 01 09:53:55 crc kubenswrapper[4933]: I1201 09:53:55.857114 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b9107505-ec32-479e-b76e-1ffa605a3bfb","Type":"ContainerStarted","Data":"82b88b91b225839bdff64a527dbefd5c7902096f830443cb14b3751ac0b98e27"} Dec 01 09:53:55 crc kubenswrapper[4933]: I1201 09:53:55.857662 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 01 09:53:55 crc kubenswrapper[4933]: I1201 09:53:55.882085 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.882062655 podStartE2EDuration="2.882062655s" podCreationTimestamp="2025-12-01 09:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:53:55.87819822 +0000 UTC m=+1326.519921845" watchObservedRunningTime="2025-12-01 09:53:55.882062655 +0000 UTC m=+1326.523786270" Dec 01 09:53:59 crc kubenswrapper[4933]: I1201 09:53:59.303867 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 01 09:53:59 crc kubenswrapper[4933]: I1201 09:53:59.757179 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-7mcq4"] Dec 01 09:53:59 crc kubenswrapper[4933]: I1201 09:53:59.759575 4933 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7mcq4" Dec 01 09:53:59 crc kubenswrapper[4933]: I1201 09:53:59.762546 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 01 09:53:59 crc kubenswrapper[4933]: I1201 09:53:59.762850 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 01 09:53:59 crc kubenswrapper[4933]: I1201 09:53:59.785119 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7mcq4"] Dec 01 09:53:59 crc kubenswrapper[4933]: I1201 09:53:59.853587 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrhk7\" (UniqueName: \"kubernetes.io/projected/cd09dcd1-e9a3-40dd-9497-11d652bad925-kube-api-access-xrhk7\") pod \"nova-cell0-cell-mapping-7mcq4\" (UID: \"cd09dcd1-e9a3-40dd-9497-11d652bad925\") " pod="openstack/nova-cell0-cell-mapping-7mcq4" Dec 01 09:53:59 crc kubenswrapper[4933]: I1201 09:53:59.853674 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd09dcd1-e9a3-40dd-9497-11d652bad925-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7mcq4\" (UID: \"cd09dcd1-e9a3-40dd-9497-11d652bad925\") " pod="openstack/nova-cell0-cell-mapping-7mcq4" Dec 01 09:53:59 crc kubenswrapper[4933]: I1201 09:53:59.853724 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd09dcd1-e9a3-40dd-9497-11d652bad925-scripts\") pod \"nova-cell0-cell-mapping-7mcq4\" (UID: \"cd09dcd1-e9a3-40dd-9497-11d652bad925\") " pod="openstack/nova-cell0-cell-mapping-7mcq4" Dec 01 09:53:59 crc kubenswrapper[4933]: I1201 09:53:59.853752 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd09dcd1-e9a3-40dd-9497-11d652bad925-config-data\") pod \"nova-cell0-cell-mapping-7mcq4\" (UID: \"cd09dcd1-e9a3-40dd-9497-11d652bad925\") " pod="openstack/nova-cell0-cell-mapping-7mcq4" Dec 01 09:53:59 crc kubenswrapper[4933]: I1201 09:53:59.956160 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrhk7\" (UniqueName: \"kubernetes.io/projected/cd09dcd1-e9a3-40dd-9497-11d652bad925-kube-api-access-xrhk7\") pod \"nova-cell0-cell-mapping-7mcq4\" (UID: \"cd09dcd1-e9a3-40dd-9497-11d652bad925\") " pod="openstack/nova-cell0-cell-mapping-7mcq4" Dec 01 09:53:59 crc kubenswrapper[4933]: I1201 09:53:59.956365 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd09dcd1-e9a3-40dd-9497-11d652bad925-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7mcq4\" (UID: \"cd09dcd1-e9a3-40dd-9497-11d652bad925\") " pod="openstack/nova-cell0-cell-mapping-7mcq4" Dec 01 09:53:59 crc kubenswrapper[4933]: I1201 09:53:59.956448 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd09dcd1-e9a3-40dd-9497-11d652bad925-scripts\") pod \"nova-cell0-cell-mapping-7mcq4\" (UID: \"cd09dcd1-e9a3-40dd-9497-11d652bad925\") " pod="openstack/nova-cell0-cell-mapping-7mcq4" Dec 01 09:53:59 crc kubenswrapper[4933]: I1201 09:53:59.956486 4933 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd09dcd1-e9a3-40dd-9497-11d652bad925-config-data\") pod \"nova-cell0-cell-mapping-7mcq4\" (UID: \"cd09dcd1-e9a3-40dd-9497-11d652bad925\") " pod="openstack/nova-cell0-cell-mapping-7mcq4" Dec 01 09:53:59 crc kubenswrapper[4933]: I1201 09:53:59.978779 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd09dcd1-e9a3-40dd-9497-11d652bad925-config-data\") pod \"nova-cell0-cell-mapping-7mcq4\" (UID: \"cd09dcd1-e9a3-40dd-9497-11d652bad925\") " pod="openstack/nova-cell0-cell-mapping-7mcq4" Dec 01 09:53:59 crc kubenswrapper[4933]: I1201 09:53:59.989369 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 01 09:53:59 crc kubenswrapper[4933]: I1201 09:53:59.991694 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 01 09:53:59 crc kubenswrapper[4933]: I1201 09:53:59.996058 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd09dcd1-e9a3-40dd-9497-11d652bad925-scripts\") pod \"nova-cell0-cell-mapping-7mcq4\" (UID: \"cd09dcd1-e9a3-40dd-9497-11d652bad925\") " pod="openstack/nova-cell0-cell-mapping-7mcq4" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:53:59.998622 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.019689 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd09dcd1-e9a3-40dd-9497-11d652bad925-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7mcq4\" (UID: \"cd09dcd1-e9a3-40dd-9497-11d652bad925\") " pod="openstack/nova-cell0-cell-mapping-7mcq4" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.026528 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.035095 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrhk7\" (UniqueName: \"kubernetes.io/projected/cd09dcd1-e9a3-40dd-9497-11d652bad925-kube-api-access-xrhk7\") pod \"nova-cell0-cell-mapping-7mcq4\" (UID: \"cd09dcd1-e9a3-40dd-9497-11d652bad925\") " pod="openstack/nova-cell0-cell-mapping-7mcq4" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.061374 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.063652 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.068189 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.104027 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7mcq4" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.138510 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.159631 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d672f0c-b605-4904-bbf9-45b2bd0a1f92-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5d672f0c-b605-4904-bbf9-45b2bd0a1f92\") " pod="openstack/nova-api-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.159706 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbnhz\" (UniqueName: \"kubernetes.io/projected/5d672f0c-b605-4904-bbf9-45b2bd0a1f92-kube-api-access-lbnhz\") pod \"nova-api-0\" (UID: \"5d672f0c-b605-4904-bbf9-45b2bd0a1f92\") " pod="openstack/nova-api-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.159752 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d672f0c-b605-4904-bbf9-45b2bd0a1f92-config-data\") pod \"nova-api-0\" (UID: \"5d672f0c-b605-4904-bbf9-45b2bd0a1f92\") " pod="openstack/nova-api-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.159813 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6eb5c1-55f5-4bf4-a746-73fd82be6850-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dc6eb5c1-55f5-4bf4-a746-73fd82be6850\") " pod="openstack/nova-scheduler-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.159888 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d672f0c-b605-4904-bbf9-45b2bd0a1f92-logs\") pod \"nova-api-0\" (UID: \"5d672f0c-b605-4904-bbf9-45b2bd0a1f92\") " pod="openstack/nova-api-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.159925 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btghc\" (UniqueName: \"kubernetes.io/projected/dc6eb5c1-55f5-4bf4-a746-73fd82be6850-kube-api-access-btghc\") pod \"nova-scheduler-0\" (UID: \"dc6eb5c1-55f5-4bf4-a746-73fd82be6850\") " pod="openstack/nova-scheduler-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.159962 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6eb5c1-55f5-4bf4-a746-73fd82be6850-config-data\") pod \"nova-scheduler-0\" (UID: \"dc6eb5c1-55f5-4bf4-a746-73fd82be6850\") " pod="openstack/nova-scheduler-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.225014 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.226854 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.231118 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.255405 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.269444 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbnhz\" (UniqueName: \"kubernetes.io/projected/5d672f0c-b605-4904-bbf9-45b2bd0a1f92-kube-api-access-lbnhz\") pod \"nova-api-0\" (UID: \"5d672f0c-b605-4904-bbf9-45b2bd0a1f92\") " pod="openstack/nova-api-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.269516 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d672f0c-b605-4904-bbf9-45b2bd0a1f92-config-data\") pod \"nova-api-0\" (UID: \"5d672f0c-b605-4904-bbf9-45b2bd0a1f92\") " pod="openstack/nova-api-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.269570 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6eb5c1-55f5-4bf4-a746-73fd82be6850-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dc6eb5c1-55f5-4bf4-a746-73fd82be6850\") " pod="openstack/nova-scheduler-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.269628 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d672f0c-b605-4904-bbf9-45b2bd0a1f92-logs\") pod \"nova-api-0\" (UID: \"5d672f0c-b605-4904-bbf9-45b2bd0a1f92\") " pod="openstack/nova-api-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.269653 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btghc\" (UniqueName: \"kubernetes.io/projected/dc6eb5c1-55f5-4bf4-a746-73fd82be6850-kube-api-access-btghc\") pod \"nova-scheduler-0\" (UID: \"dc6eb5c1-55f5-4bf4-a746-73fd82be6850\") " pod="openstack/nova-scheduler-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.269678 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6eb5c1-55f5-4bf4-a746-73fd82be6850-config-data\") pod \"nova-scheduler-0\" (UID: \"dc6eb5c1-55f5-4bf4-a746-73fd82be6850\") " pod="openstack/nova-scheduler-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.269729 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d672f0c-b605-4904-bbf9-45b2bd0a1f92-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5d672f0c-b605-4904-bbf9-45b2bd0a1f92\") " pod="openstack/nova-api-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.272548 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d672f0c-b605-4904-bbf9-45b2bd0a1f92-logs\") pod \"nova-api-0\" (UID: \"5d672f0c-b605-4904-bbf9-45b2bd0a1f92\") " pod="openstack/nova-api-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.281152 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.289633 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.294656 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.301743 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d672f0c-b605-4904-bbf9-45b2bd0a1f92-config-data\") pod \"nova-api-0\" (UID: \"5d672f0c-b605-4904-bbf9-45b2bd0a1f92\") " pod="openstack/nova-api-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.303033 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6eb5c1-55f5-4bf4-a746-73fd82be6850-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dc6eb5c1-55f5-4bf4-a746-73fd82be6850\") " pod="openstack/nova-scheduler-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.303151 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6eb5c1-55f5-4bf4-a746-73fd82be6850-config-data\") pod \"nova-scheduler-0\" (UID: \"dc6eb5c1-55f5-4bf4-a746-73fd82be6850\") " pod="openstack/nova-scheduler-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.303545 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d672f0c-b605-4904-bbf9-45b2bd0a1f92-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5d672f0c-b605-4904-bbf9-45b2bd0a1f92\") " pod="openstack/nova-api-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.314371 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btghc\" (UniqueName: \"kubernetes.io/projected/dc6eb5c1-55f5-4bf4-a746-73fd82be6850-kube-api-access-btghc\") pod \"nova-scheduler-0\" (UID: \"dc6eb5c1-55f5-4bf4-a746-73fd82be6850\") " pod="openstack/nova-scheduler-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.328440 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbnhz\" (UniqueName: \"kubernetes.io/projected/5d672f0c-b605-4904-bbf9-45b2bd0a1f92-kube-api-access-lbnhz\") pod \"nova-api-0\" (UID: \"5d672f0c-b605-4904-bbf9-45b2bd0a1f92\") " pod="openstack/nova-api-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.338197 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.379357 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qptzs\" (UniqueName: \"kubernetes.io/projected/caa4bf9b-bdd7-4f39-aead-94858835d5f1-kube-api-access-qptzs\") pod \"nova-cell1-novncproxy-0\" (UID: \"caa4bf9b-bdd7-4f39-aead-94858835d5f1\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.386790 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.389922 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caa4bf9b-bdd7-4f39-aead-94858835d5f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"caa4bf9b-bdd7-4f39-aead-94858835d5f1\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.390015 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06310ed0-6317-4a9f-a538-5813d66bc5cd-config-data\") pod \"nova-metadata-0\" (UID: \"06310ed0-6317-4a9f-a538-5813d66bc5cd\") " pod="openstack/nova-metadata-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.390062 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06310ed0-6317-4a9f-a538-5813d66bc5cd-logs\") pod \"nova-metadata-0\" (UID: \"06310ed0-6317-4a9f-a538-5813d66bc5cd\") " pod="openstack/nova-metadata-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.390434 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl9jn\" (UniqueName: \"kubernetes.io/projected/06310ed0-6317-4a9f-a538-5813d66bc5cd-kube-api-access-fl9jn\") pod \"nova-metadata-0\" (UID: \"06310ed0-6317-4a9f-a538-5813d66bc5cd\") " pod="openstack/nova-metadata-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.391257 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caa4bf9b-bdd7-4f39-aead-94858835d5f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"caa4bf9b-bdd7-4f39-aead-94858835d5f1\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.391544 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06310ed0-6317-4a9f-a538-5813d66bc5cd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"06310ed0-6317-4a9f-a538-5813d66bc5cd\") " pod="openstack/nova-metadata-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.442514 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-tgfgn"] Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.456160 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.464873 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.466609 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-tgfgn"] Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.494516 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qptzs\" (UniqueName: \"kubernetes.io/projected/caa4bf9b-bdd7-4f39-aead-94858835d5f1-kube-api-access-qptzs\") pod \"nova-cell1-novncproxy-0\" (UID: \"caa4bf9b-bdd7-4f39-aead-94858835d5f1\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.494623 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caa4bf9b-bdd7-4f39-aead-94858835d5f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"caa4bf9b-bdd7-4f39-aead-94858835d5f1\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.494667 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06310ed0-6317-4a9f-a538-5813d66bc5cd-config-data\") pod \"nova-metadata-0\" (UID: \"06310ed0-6317-4a9f-a538-5813d66bc5cd\") " pod="openstack/nova-metadata-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.494690 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06310ed0-6317-4a9f-a538-5813d66bc5cd-logs\") pod \"nova-metadata-0\" (UID: \"06310ed0-6317-4a9f-a538-5813d66bc5cd\") " pod="openstack/nova-metadata-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.494718 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl9jn\" (UniqueName: \"kubernetes.io/projected/06310ed0-6317-4a9f-a538-5813d66bc5cd-kube-api-access-fl9jn\") pod \"nova-metadata-0\" (UID: \"06310ed0-6317-4a9f-a538-5813d66bc5cd\") " pod="openstack/nova-metadata-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.494814 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caa4bf9b-bdd7-4f39-aead-94858835d5f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"caa4bf9b-bdd7-4f39-aead-94858835d5f1\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.494872 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06310ed0-6317-4a9f-a538-5813d66bc5cd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"06310ed0-6317-4a9f-a538-5813d66bc5cd\") " pod="openstack/nova-metadata-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.502706 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caa4bf9b-bdd7-4f39-aead-94858835d5f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"caa4bf9b-bdd7-4f39-aead-94858835d5f1\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.503641 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06310ed0-6317-4a9f-a538-5813d66bc5cd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"06310ed0-6317-4a9f-a538-5813d66bc5cd\") " pod="openstack/nova-metadata-0" Dec 01 09:54:00 crc 
kubenswrapper[4933]: I1201 09:54:00.505391 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06310ed0-6317-4a9f-a538-5813d66bc5cd-logs\") pod \"nova-metadata-0\" (UID: \"06310ed0-6317-4a9f-a538-5813d66bc5cd\") " pod="openstack/nova-metadata-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.506028 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06310ed0-6317-4a9f-a538-5813d66bc5cd-config-data\") pod \"nova-metadata-0\" (UID: \"06310ed0-6317-4a9f-a538-5813d66bc5cd\") " pod="openstack/nova-metadata-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.510509 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caa4bf9b-bdd7-4f39-aead-94858835d5f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"caa4bf9b-bdd7-4f39-aead-94858835d5f1\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.516813 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qptzs\" (UniqueName: \"kubernetes.io/projected/caa4bf9b-bdd7-4f39-aead-94858835d5f1-kube-api-access-qptzs\") pod \"nova-cell1-novncproxy-0\" (UID: \"caa4bf9b-bdd7-4f39-aead-94858835d5f1\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.528453 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl9jn\" (UniqueName: \"kubernetes.io/projected/06310ed0-6317-4a9f-a538-5813d66bc5cd-kube-api-access-fl9jn\") pod \"nova-metadata-0\" (UID: \"06310ed0-6317-4a9f-a538-5813d66bc5cd\") " pod="openstack/nova-metadata-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.600207 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-dns-svc\") pod \"dnsmasq-dns-757b4f8459-tgfgn\" (UID: \"cff60748-f83c-489e-a7fc-19fb4473f029\") " pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.600415 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-config\") pod \"dnsmasq-dns-757b4f8459-tgfgn\" (UID: \"cff60748-f83c-489e-a7fc-19fb4473f029\") " pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.600467 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-tgfgn\" (UID: \"cff60748-f83c-489e-a7fc-19fb4473f029\") " pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.600542 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f624\" (UniqueName: \"kubernetes.io/projected/cff60748-f83c-489e-a7fc-19fb4473f029-kube-api-access-6f624\") pod \"dnsmasq-dns-757b4f8459-tgfgn\" (UID: \"cff60748-f83c-489e-a7fc-19fb4473f029\") " pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.600566 4933 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-tgfgn\" (UID: \"cff60748-f83c-489e-a7fc-19fb4473f029\") " pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.600587 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-tgfgn\" (UID: \"cff60748-f83c-489e-a7fc-19fb4473f029\") " pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.704098 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-config\") pod \"dnsmasq-dns-757b4f8459-tgfgn\" (UID: \"cff60748-f83c-489e-a7fc-19fb4473f029\") " pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.704503 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-tgfgn\" (UID: \"cff60748-f83c-489e-a7fc-19fb4473f029\") " pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.704571 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f624\" (UniqueName: \"kubernetes.io/projected/cff60748-f83c-489e-a7fc-19fb4473f029-kube-api-access-6f624\") pod \"dnsmasq-dns-757b4f8459-tgfgn\" (UID: \"cff60748-f83c-489e-a7fc-19fb4473f029\") " pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.704593 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-tgfgn\" (UID: \"cff60748-f83c-489e-a7fc-19fb4473f029\") " pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.704612 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-tgfgn\" (UID: \"cff60748-f83c-489e-a7fc-19fb4473f029\") " pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.704653 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-dns-svc\") pod \"dnsmasq-dns-757b4f8459-tgfgn\" (UID: \"cff60748-f83c-489e-a7fc-19fb4473f029\") " pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.705759 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-dns-svc\") pod \"dnsmasq-dns-757b4f8459-tgfgn\" (UID: \"cff60748-f83c-489e-a7fc-19fb4473f029\") " pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.705948 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-config\") pod \"dnsmasq-dns-757b4f8459-tgfgn\" (UID: \"cff60748-f83c-489e-a7fc-19fb4473f029\") " pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.706430 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-tgfgn\" (UID: \"cff60748-f83c-489e-a7fc-19fb4473f029\") " pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.706920 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-tgfgn\" (UID: \"cff60748-f83c-489e-a7fc-19fb4473f029\") " pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.709230 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-tgfgn\" (UID: \"cff60748-f83c-489e-a7fc-19fb4473f029\") " pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.725671 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.737087 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f624\" (UniqueName: \"kubernetes.io/projected/cff60748-f83c-489e-a7fc-19fb4473f029-kube-api-access-6f624\") pod \"dnsmasq-dns-757b4f8459-tgfgn\" (UID: \"cff60748-f83c-489e-a7fc-19fb4473f029\") " pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.767447 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.797853 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.861902 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7mcq4"] Dec 01 09:54:00 crc kubenswrapper[4933]: I1201 09:54:00.939098 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7mcq4" event={"ID":"cd09dcd1-e9a3-40dd-9497-11d652bad925","Type":"ContainerStarted","Data":"ce69db3267e27be3cad5103e4d6e1c7e57451055f7e6cc5c04bb4ed1c94b66ab"} Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.080574 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.095813 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5tkhr"] Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.097897 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5tkhr" Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.100537 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.100830 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.106401 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5tkhr"] Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.193036 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.224705 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c9fz\" (UniqueName: \"kubernetes.io/projected/dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a-kube-api-access-2c9fz\") pod \"nova-cell1-conductor-db-sync-5tkhr\" (UID: \"dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a\") " pod="openstack/nova-cell1-conductor-db-sync-5tkhr" Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.224795 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5tkhr\" (UID: \"dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a\") " pod="openstack/nova-cell1-conductor-db-sync-5tkhr" Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.224828 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a-scripts\") pod \"nova-cell1-conductor-db-sync-5tkhr\" (UID: \"dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a\") " pod="openstack/nova-cell1-conductor-db-sync-5tkhr" Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.224887 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a-config-data\") pod \"nova-cell1-conductor-db-sync-5tkhr\" (UID: \"dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a\") " pod="openstack/nova-cell1-conductor-db-sync-5tkhr" Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.329738 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c9fz\" (UniqueName: \"kubernetes.io/projected/dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a-kube-api-access-2c9fz\") pod \"nova-cell1-conductor-db-sync-5tkhr\" (UID: \"dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a\") " pod="openstack/nova-cell1-conductor-db-sync-5tkhr" Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.330425 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5tkhr\" (UID: \"dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a\") " pod="openstack/nova-cell1-conductor-db-sync-5tkhr" Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.330488 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a-scripts\") pod \"nova-cell1-conductor-db-sync-5tkhr\" (UID: 
\"dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a\") " pod="openstack/nova-cell1-conductor-db-sync-5tkhr" Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.330593 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a-config-data\") pod \"nova-cell1-conductor-db-sync-5tkhr\" (UID: \"dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a\") " pod="openstack/nova-cell1-conductor-db-sync-5tkhr" Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.343130 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a-scripts\") pod \"nova-cell1-conductor-db-sync-5tkhr\" (UID: \"dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a\") " pod="openstack/nova-cell1-conductor-db-sync-5tkhr" Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.353282 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c9fz\" (UniqueName: \"kubernetes.io/projected/dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a-kube-api-access-2c9fz\") pod \"nova-cell1-conductor-db-sync-5tkhr\" (UID: \"dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a\") " pod="openstack/nova-cell1-conductor-db-sync-5tkhr" Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.353354 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5tkhr\" (UID: \"dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a\") " pod="openstack/nova-cell1-conductor-db-sync-5tkhr" Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.354455 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a-config-data\") pod \"nova-cell1-conductor-db-sync-5tkhr\" (UID: \"dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a\") " pod="openstack/nova-cell1-conductor-db-sync-5tkhr" Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.494935 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5tkhr" Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.502566 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.590450 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.606849 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-tgfgn"] Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.960918 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7mcq4" event={"ID":"cd09dcd1-e9a3-40dd-9497-11d652bad925","Type":"ContainerStarted","Data":"0bc9570458a3846b9903513456966d5f2d858536902d211d6f3939f143091769"} Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.973716 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"caa4bf9b-bdd7-4f39-aead-94858835d5f1","Type":"ContainerStarted","Data":"574a422b58bdc2a45f63aeab791c0979d409ddb7327d2cb4844c3049cc573a9d"} Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.975486 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"06310ed0-6317-4a9f-a538-5813d66bc5cd","Type":"ContainerStarted","Data":"7735ea5c601afe45307587f0d40b0e0edaeec1c256167faa0f972b3f9734446b"} Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.976489 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dc6eb5c1-55f5-4bf4-a746-73fd82be6850","Type":"ContainerStarted","Data":"fd5d31e2ec2c55f28214604916ca451e77e97ec131f9e10b34e88d4f66fedb4d"} Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.977344 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5d672f0c-b605-4904-bbf9-45b2bd0a1f92","Type":"ContainerStarted","Data":"33c942eca76106f67e57c5c48c611f468bddb33a6ca4440b285d5ee61b152a52"} Dec 01 09:54:01 crc kubenswrapper[4933]: I1201 09:54:01.978468 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" event={"ID":"cff60748-f83c-489e-a7fc-19fb4473f029","Type":"ContainerStarted","Data":"f57aea152a8bd7475734aebf2fb533467e8822a5fcdec3ac9413e34816b8504d"} Dec 01 09:54:02 crc kubenswrapper[4933]: I1201 09:54:02.014476 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-7mcq4" podStartSLOduration=3.01445271 podStartE2EDuration="3.01445271s" podCreationTimestamp="2025-12-01 09:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:54:01.999471543 +0000 UTC m=+1332.641195158" watchObservedRunningTime="2025-12-01 09:54:02.01445271 +0000 UTC m=+1332.656176325" Dec 01 09:54:02 crc kubenswrapper[4933]: I1201 09:54:02.168605 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5tkhr"] Dec 01 09:54:02 crc kubenswrapper[4933]: W1201 09:54:02.182111 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd8e4327_7bd2_49f1_ab79_e8e9eeb82e9a.slice/crio-6978cb26fd0f646376a31cbcbc508d3c7ba37b09d62be033b11b781c70fe31fa WatchSource:0}: Error finding container 6978cb26fd0f646376a31cbcbc508d3c7ba37b09d62be033b11b781c70fe31fa: Status 
404 returned error can't find the container with id 6978cb26fd0f646376a31cbcbc508d3c7ba37b09d62be033b11b781c70fe31fa Dec 01 09:54:02 crc kubenswrapper[4933]: I1201 09:54:02.998813 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5tkhr" event={"ID":"dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a","Type":"ContainerStarted","Data":"fbb24a7e5a772eba1990812c40849d9e691917207c4dd782c53e7dbf25c80bb5"} Dec 01 09:54:02 crc kubenswrapper[4933]: I1201 09:54:02.999331 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5tkhr" event={"ID":"dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a","Type":"ContainerStarted","Data":"6978cb26fd0f646376a31cbcbc508d3c7ba37b09d62be033b11b781c70fe31fa"} Dec 01 09:54:03 crc kubenswrapper[4933]: I1201 09:54:03.010177 4933 generic.go:334] "Generic (PLEG): container finished" podID="cff60748-f83c-489e-a7fc-19fb4473f029" containerID="1ce9a84567668ebd6480024e7f63f8bc56d72d7f1cdf92649d64bda2ada2bfd7" exitCode=0 Dec 01 09:54:03 crc kubenswrapper[4933]: I1201 09:54:03.012017 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" event={"ID":"cff60748-f83c-489e-a7fc-19fb4473f029","Type":"ContainerDied","Data":"1ce9a84567668ebd6480024e7f63f8bc56d72d7f1cdf92649d64bda2ada2bfd7"} Dec 01 09:54:03 crc kubenswrapper[4933]: I1201 09:54:03.033679 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-5tkhr" podStartSLOduration=2.033638317 podStartE2EDuration="2.033638317s" podCreationTimestamp="2025-12-01 09:54:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:54:03.021656415 +0000 UTC m=+1333.663380030" watchObservedRunningTime="2025-12-01 09:54:03.033638317 +0000 UTC m=+1333.675361942" Dec 01 09:54:03 crc kubenswrapper[4933]: I1201 09:54:03.769460 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:54:03 crc kubenswrapper[4933]: I1201 09:54:03.805790 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.059561 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5d672f0c-b605-4904-bbf9-45b2bd0a1f92","Type":"ContainerStarted","Data":"f514b348e5e7e6acc577f77f88dcb18b88e1f50de5754ec566023827eedbd407"} Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.060201 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5d672f0c-b605-4904-bbf9-45b2bd0a1f92","Type":"ContainerStarted","Data":"98b76d2ba0a8f08d70a254415999c6a6c4afce06e3e09c36a84fd895b85f4426"} Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.066976 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" event={"ID":"cff60748-f83c-489e-a7fc-19fb4473f029","Type":"ContainerStarted","Data":"5d79a727e7b8333af79887f96ba380230eeef8aa55155d605b16c843ae6b1862"} Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.067064 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.069008 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"caa4bf9b-bdd7-4f39-aead-94858835d5f1","Type":"ContainerStarted","Data":"b47f736cbaa01b4cdaff4ef2cd5c7d3df2cf9f7e71a9b717aa781ef2b6737d00"} Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.069131 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="caa4bf9b-bdd7-4f39-aead-94858835d5f1" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://b47f736cbaa01b4cdaff4ef2cd5c7d3df2cf9f7e71a9b717aa781ef2b6737d00" gracePeriod=30 Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.070974 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"06310ed0-6317-4a9f-a538-5813d66bc5cd","Type":"ContainerStarted","Data":"1557ba7cfa6cab3a1cdfeab0dfadf680ae7b5f6cbb1c875e479591dae40f75a2"} Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.071012 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"06310ed0-6317-4a9f-a538-5813d66bc5cd","Type":"ContainerStarted","Data":"a4690ad64d204d9d60fc6a8ee0cf09ca9d2f96067e022621c8da72dcbc3ee37f"} Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.071092 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="06310ed0-6317-4a9f-a538-5813d66bc5cd" containerName="nova-metadata-log" containerID="cri-o://a4690ad64d204d9d60fc6a8ee0cf09ca9d2f96067e022621c8da72dcbc3ee37f" gracePeriod=30 Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.071246 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="06310ed0-6317-4a9f-a538-5813d66bc5cd" containerName="nova-metadata-metadata" containerID="cri-o://1557ba7cfa6cab3a1cdfeab0dfadf680ae7b5f6cbb1c875e479591dae40f75a2" gracePeriod=30 Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.075988 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dc6eb5c1-55f5-4bf4-a746-73fd82be6850","Type":"ContainerStarted","Data":"2fad913540c075b770ed527f9d26e74cb870956bfc917580902a0358ca9a2f58"} Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.092593 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.625275672 podStartE2EDuration="8.092573137s" podCreationTimestamp="2025-12-01 09:53:59 +0000 UTC" firstStartedPulling="2025-12-01 09:54:01.213145097 +0000 UTC m=+1331.854868722" lastFinishedPulling="2025-12-01 09:54:05.680442572 +0000 UTC m=+1336.322166187" observedRunningTime="2025-12-01 09:54:07.085528175 +0000 UTC m=+1337.727251790" watchObservedRunningTime="2025-12-01 09:54:07.092573137 +0000 UTC m=+1337.734296752" Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.117915 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" podStartSLOduration=7.117882356 podStartE2EDuration="7.117882356s" podCreationTimestamp="2025-12-01 09:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:54:07.110903085 +0000 UTC m=+1337.752626700" watchObservedRunningTime="2025-12-01 09:54:07.117882356 +0000 UTC m=+1337.759605971" Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.155167 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.10368575 podStartE2EDuration="7.155136657s" 
podCreationTimestamp="2025-12-01 09:54:00 +0000 UTC" firstStartedPulling="2025-12-01 09:54:01.631986358 +0000 UTC m=+1332.273709973" lastFinishedPulling="2025-12-01 09:54:05.683437265 +0000 UTC m=+1336.325160880" observedRunningTime="2025-12-01 09:54:07.152226826 +0000 UTC m=+1337.793950461" watchObservedRunningTime="2025-12-01 09:54:07.155136657 +0000 UTC m=+1337.796860272" Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.166256 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.998681774 podStartE2EDuration="7.166196358s" podCreationTimestamp="2025-12-01 09:54:00 +0000 UTC" firstStartedPulling="2025-12-01 09:54:01.512883656 +0000 UTC m=+1332.154607271" lastFinishedPulling="2025-12-01 09:54:05.68039823 +0000 UTC m=+1336.322121855" observedRunningTime="2025-12-01 09:54:07.134254926 +0000 UTC m=+1337.775978541" watchObservedRunningTime="2025-12-01 09:54:07.166196358 +0000 UTC m=+1337.807919983" Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.178172 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.609671412 podStartE2EDuration="7.17814307s" podCreationTimestamp="2025-12-01 09:54:00 +0000 UTC" firstStartedPulling="2025-12-01 09:54:01.111909212 +0000 UTC m=+1331.753632827" lastFinishedPulling="2025-12-01 09:54:05.68038087 +0000 UTC m=+1336.322104485" observedRunningTime="2025-12-01 09:54:07.169810106 +0000 UTC m=+1337.811533741" watchObservedRunningTime="2025-12-01 09:54:07.17814307 +0000 UTC m=+1337.819866685" Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.664042 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.830975 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06310ed0-6317-4a9f-a538-5813d66bc5cd-logs\") pod \"06310ed0-6317-4a9f-a538-5813d66bc5cd\" (UID: \"06310ed0-6317-4a9f-a538-5813d66bc5cd\") " Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.831369 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06310ed0-6317-4a9f-a538-5813d66bc5cd-logs" (OuterVolumeSpecName: "logs") pod "06310ed0-6317-4a9f-a538-5813d66bc5cd" (UID: "06310ed0-6317-4a9f-a538-5813d66bc5cd"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.831558 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06310ed0-6317-4a9f-a538-5813d66bc5cd-combined-ca-bundle\") pod \"06310ed0-6317-4a9f-a538-5813d66bc5cd\" (UID: \"06310ed0-6317-4a9f-a538-5813d66bc5cd\") " Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.831633 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl9jn\" (UniqueName: \"kubernetes.io/projected/06310ed0-6317-4a9f-a538-5813d66bc5cd-kube-api-access-fl9jn\") pod \"06310ed0-6317-4a9f-a538-5813d66bc5cd\" (UID: \"06310ed0-6317-4a9f-a538-5813d66bc5cd\") " Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.832716 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06310ed0-6317-4a9f-a538-5813d66bc5cd-config-data\") pod \"06310ed0-6317-4a9f-a538-5813d66bc5cd\" (UID: \"06310ed0-6317-4a9f-a538-5813d66bc5cd\") " Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.835647 4933 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06310ed0-6317-4a9f-a538-5813d66bc5cd-logs\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.848506 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06310ed0-6317-4a9f-a538-5813d66bc5cd-kube-api-access-fl9jn" (OuterVolumeSpecName: "kube-api-access-fl9jn") pod "06310ed0-6317-4a9f-a538-5813d66bc5cd" (UID: "06310ed0-6317-4a9f-a538-5813d66bc5cd"). InnerVolumeSpecName "kube-api-access-fl9jn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.865748 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06310ed0-6317-4a9f-a538-5813d66bc5cd-config-data" (OuterVolumeSpecName: "config-data") pod "06310ed0-6317-4a9f-a538-5813d66bc5cd" (UID: "06310ed0-6317-4a9f-a538-5813d66bc5cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.882491 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06310ed0-6317-4a9f-a538-5813d66bc5cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06310ed0-6317-4a9f-a538-5813d66bc5cd" (UID: "06310ed0-6317-4a9f-a538-5813d66bc5cd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.937975 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl9jn\" (UniqueName: \"kubernetes.io/projected/06310ed0-6317-4a9f-a538-5813d66bc5cd-kube-api-access-fl9jn\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.938020 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06310ed0-6317-4a9f-a538-5813d66bc5cd-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:07 crc kubenswrapper[4933]: I1201 09:54:07.938039 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06310ed0-6317-4a9f-a538-5813d66bc5cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.087681 4933 generic.go:334] "Generic (PLEG): container finished" podID="06310ed0-6317-4a9f-a538-5813d66bc5cd" containerID="1557ba7cfa6cab3a1cdfeab0dfadf680ae7b5f6cbb1c875e479591dae40f75a2" exitCode=0 Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.087711 4933 generic.go:334] "Generic (PLEG): container finished" podID="06310ed0-6317-4a9f-a538-5813d66bc5cd" containerID="a4690ad64d204d9d60fc6a8ee0cf09ca9d2f96067e022621c8da72dcbc3ee37f" exitCode=143 Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.087752 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.087863 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"06310ed0-6317-4a9f-a538-5813d66bc5cd","Type":"ContainerDied","Data":"1557ba7cfa6cab3a1cdfeab0dfadf680ae7b5f6cbb1c875e479591dae40f75a2"} Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.091427 4933 scope.go:117] "RemoveContainer" containerID="1557ba7cfa6cab3a1cdfeab0dfadf680ae7b5f6cbb1c875e479591dae40f75a2" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.091382 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"06310ed0-6317-4a9f-a538-5813d66bc5cd","Type":"ContainerDied","Data":"a4690ad64d204d9d60fc6a8ee0cf09ca9d2f96067e022621c8da72dcbc3ee37f"} Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.091715 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"06310ed0-6317-4a9f-a538-5813d66bc5cd","Type":"ContainerDied","Data":"7735ea5c601afe45307587f0d40b0e0edaeec1c256167faa0f972b3f9734446b"} Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.150442 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.168650 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.183316 4933 scope.go:117] "RemoveContainer" containerID="a4690ad64d204d9d60fc6a8ee0cf09ca9d2f96067e022621c8da72dcbc3ee37f" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.195818 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:54:08 crc kubenswrapper[4933]: E1201 09:54:08.196521 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06310ed0-6317-4a9f-a538-5813d66bc5cd" containerName="nova-metadata-metadata" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.196551 4933 
state_mem.go:107] "Deleted CPUSet assignment" podUID="06310ed0-6317-4a9f-a538-5813d66bc5cd" containerName="nova-metadata-metadata" Dec 01 09:54:08 crc kubenswrapper[4933]: E1201 09:54:08.196574 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06310ed0-6317-4a9f-a538-5813d66bc5cd" containerName="nova-metadata-log" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.196582 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="06310ed0-6317-4a9f-a538-5813d66bc5cd" containerName="nova-metadata-log" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.196891 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="06310ed0-6317-4a9f-a538-5813d66bc5cd" containerName="nova-metadata-metadata" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.196920 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="06310ed0-6317-4a9f-a538-5813d66bc5cd" containerName="nova-metadata-log" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.198343 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.202767 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.207877 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.225274 4933 scope.go:117] "RemoveContainer" containerID="1557ba7cfa6cab3a1cdfeab0dfadf680ae7b5f6cbb1c875e479591dae40f75a2" Dec 01 09:54:08 crc kubenswrapper[4933]: E1201 09:54:08.227368 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1557ba7cfa6cab3a1cdfeab0dfadf680ae7b5f6cbb1c875e479591dae40f75a2\": container with ID starting with 1557ba7cfa6cab3a1cdfeab0dfadf680ae7b5f6cbb1c875e479591dae40f75a2 not found: ID does not exist" containerID="1557ba7cfa6cab3a1cdfeab0dfadf680ae7b5f6cbb1c875e479591dae40f75a2" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.227445 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1557ba7cfa6cab3a1cdfeab0dfadf680ae7b5f6cbb1c875e479591dae40f75a2"} err="failed to get container status \"1557ba7cfa6cab3a1cdfeab0dfadf680ae7b5f6cbb1c875e479591dae40f75a2\": rpc error: code = NotFound desc = could not find container \"1557ba7cfa6cab3a1cdfeab0dfadf680ae7b5f6cbb1c875e479591dae40f75a2\": container with ID starting with 1557ba7cfa6cab3a1cdfeab0dfadf680ae7b5f6cbb1c875e479591dae40f75a2 not found: ID does not exist" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.227493 4933 scope.go:117] "RemoveContainer" containerID="a4690ad64d204d9d60fc6a8ee0cf09ca9d2f96067e022621c8da72dcbc3ee37f" Dec 01 09:54:08 crc kubenswrapper[4933]: E1201 09:54:08.231276 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4690ad64d204d9d60fc6a8ee0cf09ca9d2f96067e022621c8da72dcbc3ee37f\": container with ID starting with a4690ad64d204d9d60fc6a8ee0cf09ca9d2f96067e022621c8da72dcbc3ee37f not found: ID does not exist" containerID="a4690ad64d204d9d60fc6a8ee0cf09ca9d2f96067e022621c8da72dcbc3ee37f" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.231329 4933 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a4690ad64d204d9d60fc6a8ee0cf09ca9d2f96067e022621c8da72dcbc3ee37f"} err="failed to get container status \"a4690ad64d204d9d60fc6a8ee0cf09ca9d2f96067e022621c8da72dcbc3ee37f\": rpc error: code = NotFound desc = could not find container \"a4690ad64d204d9d60fc6a8ee0cf09ca9d2f96067e022621c8da72dcbc3ee37f\": container with ID starting with a4690ad64d204d9d60fc6a8ee0cf09ca9d2f96067e022621c8da72dcbc3ee37f not found: ID does not exist" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.231351 4933 scope.go:117] "RemoveContainer" containerID="1557ba7cfa6cab3a1cdfeab0dfadf680ae7b5f6cbb1c875e479591dae40f75a2" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.233294 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.236231 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1557ba7cfa6cab3a1cdfeab0dfadf680ae7b5f6cbb1c875e479591dae40f75a2"} err="failed to get container status \"1557ba7cfa6cab3a1cdfeab0dfadf680ae7b5f6cbb1c875e479591dae40f75a2\": rpc error: code = NotFound desc = could not find container \"1557ba7cfa6cab3a1cdfeab0dfadf680ae7b5f6cbb1c875e479591dae40f75a2\": container with ID starting with 1557ba7cfa6cab3a1cdfeab0dfadf680ae7b5f6cbb1c875e479591dae40f75a2 not found: ID does not exist" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.236334 4933 scope.go:117] "RemoveContainer" containerID="a4690ad64d204d9d60fc6a8ee0cf09ca9d2f96067e022621c8da72dcbc3ee37f" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.236821 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4690ad64d204d9d60fc6a8ee0cf09ca9d2f96067e022621c8da72dcbc3ee37f"} err="failed to get container status \"a4690ad64d204d9d60fc6a8ee0cf09ca9d2f96067e022621c8da72dcbc3ee37f\": rpc error: code = NotFound desc = could not find container \"a4690ad64d204d9d60fc6a8ee0cf09ca9d2f96067e022621c8da72dcbc3ee37f\": container with ID starting with a4690ad64d204d9d60fc6a8ee0cf09ca9d2f96067e022621c8da72dcbc3ee37f not found: ID does not exist" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.365470 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6c5c9d1-7fca-4d76-8336-e3d403524edc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e6c5c9d1-7fca-4d76-8336-e3d403524edc\") " pod="openstack/nova-metadata-0" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.365817 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6c5c9d1-7fca-4d76-8336-e3d403524edc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e6c5c9d1-7fca-4d76-8336-e3d403524edc\") " pod="openstack/nova-metadata-0" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.365988 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6c5c9d1-7fca-4d76-8336-e3d403524edc-logs\") pod \"nova-metadata-0\" (UID: \"e6c5c9d1-7fca-4d76-8336-e3d403524edc\") " pod="openstack/nova-metadata-0" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.366102 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e6c5c9d1-7fca-4d76-8336-e3d403524edc-config-data\") pod \"nova-metadata-0\" (UID: \"e6c5c9d1-7fca-4d76-8336-e3d403524edc\") " pod="openstack/nova-metadata-0" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.366188 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8t5w\" (UniqueName: \"kubernetes.io/projected/e6c5c9d1-7fca-4d76-8336-e3d403524edc-kube-api-access-t8t5w\") pod \"nova-metadata-0\" (UID: \"e6c5c9d1-7fca-4d76-8336-e3d403524edc\") " pod="openstack/nova-metadata-0" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.467864 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8t5w\" (UniqueName: \"kubernetes.io/projected/e6c5c9d1-7fca-4d76-8336-e3d403524edc-kube-api-access-t8t5w\") pod \"nova-metadata-0\" (UID: \"e6c5c9d1-7fca-4d76-8336-e3d403524edc\") " pod="openstack/nova-metadata-0" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.467953 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6c5c9d1-7fca-4d76-8336-e3d403524edc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e6c5c9d1-7fca-4d76-8336-e3d403524edc\") " pod="openstack/nova-metadata-0" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.468068 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6c5c9d1-7fca-4d76-8336-e3d403524edc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e6c5c9d1-7fca-4d76-8336-e3d403524edc\") " pod="openstack/nova-metadata-0" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.468144 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6c5c9d1-7fca-4d76-8336-e3d403524edc-logs\") pod \"nova-metadata-0\" (UID: \"e6c5c9d1-7fca-4d76-8336-e3d403524edc\") " pod="openstack/nova-metadata-0" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.468206 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6c5c9d1-7fca-4d76-8336-e3d403524edc-config-data\") pod \"nova-metadata-0\" (UID: \"e6c5c9d1-7fca-4d76-8336-e3d403524edc\") " pod="openstack/nova-metadata-0" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.468989 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6c5c9d1-7fca-4d76-8336-e3d403524edc-logs\") pod \"nova-metadata-0\" (UID: \"e6c5c9d1-7fca-4d76-8336-e3d403524edc\") " pod="openstack/nova-metadata-0" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.474292 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6c5c9d1-7fca-4d76-8336-e3d403524edc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e6c5c9d1-7fca-4d76-8336-e3d403524edc\") " pod="openstack/nova-metadata-0" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.474621 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6c5c9d1-7fca-4d76-8336-e3d403524edc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e6c5c9d1-7fca-4d76-8336-e3d403524edc\") " pod="openstack/nova-metadata-0" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.475285 4933 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6c5c9d1-7fca-4d76-8336-e3d403524edc-config-data\") pod \"nova-metadata-0\" (UID: \"e6c5c9d1-7fca-4d76-8336-e3d403524edc\") " pod="openstack/nova-metadata-0" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.502392 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8t5w\" (UniqueName: \"kubernetes.io/projected/e6c5c9d1-7fca-4d76-8336-e3d403524edc-kube-api-access-t8t5w\") pod \"nova-metadata-0\" (UID: \"e6c5c9d1-7fca-4d76-8336-e3d403524edc\") " pod="openstack/nova-metadata-0" Dec 01 09:54:08 crc kubenswrapper[4933]: I1201 09:54:08.538274 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 09:54:09 crc kubenswrapper[4933]: W1201 09:54:09.033887 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6c5c9d1_7fca_4d76_8336_e3d403524edc.slice/crio-61e06437948edf0c26e5363d9b631a7b1a46bef6a5076391b86e2570d63a728c WatchSource:0}: Error finding container 61e06437948edf0c26e5363d9b631a7b1a46bef6a5076391b86e2570d63a728c: Status 404 returned error can't find the container with id 61e06437948edf0c26e5363d9b631a7b1a46bef6a5076391b86e2570d63a728c Dec 01 09:54:09 crc kubenswrapper[4933]: I1201 09:54:09.035741 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:54:09 crc kubenswrapper[4933]: I1201 09:54:09.100819 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e6c5c9d1-7fca-4d76-8336-e3d403524edc","Type":"ContainerStarted","Data":"61e06437948edf0c26e5363d9b631a7b1a46bef6a5076391b86e2570d63a728c"} Dec 01 09:54:09 crc kubenswrapper[4933]: I1201 09:54:09.686323 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06310ed0-6317-4a9f-a538-5813d66bc5cd" path="/var/lib/kubelet/pods/06310ed0-6317-4a9f-a538-5813d66bc5cd/volumes" Dec 01 09:54:10 crc kubenswrapper[4933]: I1201 09:54:10.121291 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e6c5c9d1-7fca-4d76-8336-e3d403524edc","Type":"ContainerStarted","Data":"de1287f63fa4aeee5e46e33e7bb9061862653c5617b126ece25492a3de1c77da"} Dec 01 09:54:10 crc kubenswrapper[4933]: I1201 09:54:10.121795 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e6c5c9d1-7fca-4d76-8336-e3d403524edc","Type":"ContainerStarted","Data":"67155c7cf51b919ff1429262965c239ca2e9ce44d5b9bd86547036b30ad099c3"} Dec 01 09:54:10 crc kubenswrapper[4933]: I1201 09:54:10.148615 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.148583416 podStartE2EDuration="2.148583416s" podCreationTimestamp="2025-12-01 09:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:54:10.143499481 +0000 UTC m=+1340.785223096" watchObservedRunningTime="2025-12-01 09:54:10.148583416 +0000 UTC m=+1340.790307031" Dec 01 09:54:10 crc kubenswrapper[4933]: I1201 09:54:10.390329 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 01 09:54:10 crc kubenswrapper[4933]: I1201 09:54:10.390408 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-scheduler-0" Dec 01 09:54:10 crc kubenswrapper[4933]: I1201 09:54:10.429452 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 01 09:54:10 crc kubenswrapper[4933]: I1201 09:54:10.465648 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 09:54:10 crc kubenswrapper[4933]: I1201 09:54:10.466011 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 09:54:10 crc kubenswrapper[4933]: I1201 09:54:10.726703 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:11 crc kubenswrapper[4933]: I1201 09:54:11.135101 4933 generic.go:334] "Generic (PLEG): container finished" podID="cd09dcd1-e9a3-40dd-9497-11d652bad925" containerID="0bc9570458a3846b9903513456966d5f2d858536902d211d6f3939f143091769" exitCode=0 Dec 01 09:54:11 crc kubenswrapper[4933]: I1201 09:54:11.135329 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7mcq4" event={"ID":"cd09dcd1-e9a3-40dd-9497-11d652bad925","Type":"ContainerDied","Data":"0bc9570458a3846b9903513456966d5f2d858536902d211d6f3939f143091769"} Dec 01 09:54:11 crc kubenswrapper[4933]: I1201 09:54:11.178415 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 01 09:54:11 crc kubenswrapper[4933]: I1201 09:54:11.548641 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5d672f0c-b605-4904-bbf9-45b2bd0a1f92" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:54:11 crc kubenswrapper[4933]: I1201 09:54:11.548850 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5d672f0c-b605-4904-bbf9-45b2bd0a1f92" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:54:12 crc kubenswrapper[4933]: I1201 09:54:12.560120 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7mcq4" Dec 01 09:54:12 crc kubenswrapper[4933]: I1201 09:54:12.662784 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrhk7\" (UniqueName: \"kubernetes.io/projected/cd09dcd1-e9a3-40dd-9497-11d652bad925-kube-api-access-xrhk7\") pod \"cd09dcd1-e9a3-40dd-9497-11d652bad925\" (UID: \"cd09dcd1-e9a3-40dd-9497-11d652bad925\") " Dec 01 09:54:12 crc kubenswrapper[4933]: I1201 09:54:12.662965 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd09dcd1-e9a3-40dd-9497-11d652bad925-combined-ca-bundle\") pod \"cd09dcd1-e9a3-40dd-9497-11d652bad925\" (UID: \"cd09dcd1-e9a3-40dd-9497-11d652bad925\") " Dec 01 09:54:12 crc kubenswrapper[4933]: I1201 09:54:12.663034 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd09dcd1-e9a3-40dd-9497-11d652bad925-scripts\") pod \"cd09dcd1-e9a3-40dd-9497-11d652bad925\" (UID: \"cd09dcd1-e9a3-40dd-9497-11d652bad925\") " Dec 01 09:54:12 crc kubenswrapper[4933]: I1201 09:54:12.663130 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd09dcd1-e9a3-40dd-9497-11d652bad925-config-data\") pod \"cd09dcd1-e9a3-40dd-9497-11d652bad925\" (UID: \"cd09dcd1-e9a3-40dd-9497-11d652bad925\") " Dec 01 09:54:12 crc kubenswrapper[4933]: I1201 09:54:12.671859 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd09dcd1-e9a3-40dd-9497-11d652bad925-kube-api-access-xrhk7" (OuterVolumeSpecName: "kube-api-access-xrhk7") pod "cd09dcd1-e9a3-40dd-9497-11d652bad925" (UID: "cd09dcd1-e9a3-40dd-9497-11d652bad925"). InnerVolumeSpecName "kube-api-access-xrhk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:54:12 crc kubenswrapper[4933]: I1201 09:54:12.675233 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd09dcd1-e9a3-40dd-9497-11d652bad925-scripts" (OuterVolumeSpecName: "scripts") pod "cd09dcd1-e9a3-40dd-9497-11d652bad925" (UID: "cd09dcd1-e9a3-40dd-9497-11d652bad925"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:12 crc kubenswrapper[4933]: I1201 09:54:12.707271 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd09dcd1-e9a3-40dd-9497-11d652bad925-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd09dcd1-e9a3-40dd-9497-11d652bad925" (UID: "cd09dcd1-e9a3-40dd-9497-11d652bad925"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:12 crc kubenswrapper[4933]: I1201 09:54:12.711742 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd09dcd1-e9a3-40dd-9497-11d652bad925-config-data" (OuterVolumeSpecName: "config-data") pod "cd09dcd1-e9a3-40dd-9497-11d652bad925" (UID: "cd09dcd1-e9a3-40dd-9497-11d652bad925"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:12 crc kubenswrapper[4933]: I1201 09:54:12.765599 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrhk7\" (UniqueName: \"kubernetes.io/projected/cd09dcd1-e9a3-40dd-9497-11d652bad925-kube-api-access-xrhk7\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:12 crc kubenswrapper[4933]: I1201 09:54:12.765649 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd09dcd1-e9a3-40dd-9497-11d652bad925-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:12 crc kubenswrapper[4933]: I1201 09:54:12.765668 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd09dcd1-e9a3-40dd-9497-11d652bad925-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:12 crc kubenswrapper[4933]: I1201 09:54:12.765683 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd09dcd1-e9a3-40dd-9497-11d652bad925-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:13 crc kubenswrapper[4933]: I1201 09:54:13.163507 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7mcq4" event={"ID":"cd09dcd1-e9a3-40dd-9497-11d652bad925","Type":"ContainerDied","Data":"ce69db3267e27be3cad5103e4d6e1c7e57451055f7e6cc5c04bb4ed1c94b66ab"} Dec 01 09:54:13 crc kubenswrapper[4933]: I1201 09:54:13.163997 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce69db3267e27be3cad5103e4d6e1c7e57451055f7e6cc5c04bb4ed1c94b66ab" Dec 01 09:54:13 crc kubenswrapper[4933]: I1201 09:54:13.164080 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7mcq4" Dec 01 09:54:13 crc kubenswrapper[4933]: I1201 09:54:13.357845 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:54:13 crc kubenswrapper[4933]: I1201 09:54:13.358260 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5d672f0c-b605-4904-bbf9-45b2bd0a1f92" containerName="nova-api-log" containerID="cri-o://98b76d2ba0a8f08d70a254415999c6a6c4afce06e3e09c36a84fd895b85f4426" gracePeriod=30 Dec 01 09:54:13 crc kubenswrapper[4933]: I1201 09:54:13.358373 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5d672f0c-b605-4904-bbf9-45b2bd0a1f92" containerName="nova-api-api" containerID="cri-o://f514b348e5e7e6acc577f77f88dcb18b88e1f50de5754ec566023827eedbd407" gracePeriod=30 Dec 01 09:54:13 crc kubenswrapper[4933]: I1201 09:54:13.378224 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:54:13 crc kubenswrapper[4933]: I1201 09:54:13.378539 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="dc6eb5c1-55f5-4bf4-a746-73fd82be6850" containerName="nova-scheduler-scheduler" containerID="cri-o://2fad913540c075b770ed527f9d26e74cb870956bfc917580902a0358ca9a2f58" gracePeriod=30 Dec 01 09:54:13 crc kubenswrapper[4933]: I1201 09:54:13.412552 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:54:13 crc kubenswrapper[4933]: I1201 09:54:13.412884 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e6c5c9d1-7fca-4d76-8336-e3d403524edc" 
containerName="nova-metadata-log" containerID="cri-o://67155c7cf51b919ff1429262965c239ca2e9ce44d5b9bd86547036b30ad099c3" gracePeriod=30 Dec 01 09:54:13 crc kubenswrapper[4933]: I1201 09:54:13.413015 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e6c5c9d1-7fca-4d76-8336-e3d403524edc" containerName="nova-metadata-metadata" containerID="cri-o://de1287f63fa4aeee5e46e33e7bb9061862653c5617b126ece25492a3de1c77da" gracePeriod=30 Dec 01 09:54:13 crc kubenswrapper[4933]: I1201 09:54:13.539631 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 01 09:54:13 crc kubenswrapper[4933]: I1201 09:54:13.539695 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.060558 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.176081 4933 generic.go:334] "Generic (PLEG): container finished" podID="e6c5c9d1-7fca-4d76-8336-e3d403524edc" containerID="de1287f63fa4aeee5e46e33e7bb9061862653c5617b126ece25492a3de1c77da" exitCode=0 Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.176614 4933 generic.go:334] "Generic (PLEG): container finished" podID="e6c5c9d1-7fca-4d76-8336-e3d403524edc" containerID="67155c7cf51b919ff1429262965c239ca2e9ce44d5b9bd86547036b30ad099c3" exitCode=143 Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.176137 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e6c5c9d1-7fca-4d76-8336-e3d403524edc","Type":"ContainerDied","Data":"de1287f63fa4aeee5e46e33e7bb9061862653c5617b126ece25492a3de1c77da"} Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.176207 4933 util.go:48] "No ready sandbox for pod can be found. 
Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.176207 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.176732 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e6c5c9d1-7fca-4d76-8336-e3d403524edc","Type":"ContainerDied","Data":"67155c7cf51b919ff1429262965c239ca2e9ce44d5b9bd86547036b30ad099c3"} Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.176770 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e6c5c9d1-7fca-4d76-8336-e3d403524edc","Type":"ContainerDied","Data":"61e06437948edf0c26e5363d9b631a7b1a46bef6a5076391b86e2570d63a728c"} Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.176843 4933 scope.go:117] "RemoveContainer" containerID="de1287f63fa4aeee5e46e33e7bb9061862653c5617b126ece25492a3de1c77da" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.179726 4933 generic.go:334] "Generic (PLEG): container finished" podID="5d672f0c-b605-4904-bbf9-45b2bd0a1f92" containerID="98b76d2ba0a8f08d70a254415999c6a6c4afce06e3e09c36a84fd895b85f4426" exitCode=143 Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.179832 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5d672f0c-b605-4904-bbf9-45b2bd0a1f92","Type":"ContainerDied","Data":"98b76d2ba0a8f08d70a254415999c6a6c4afce06e3e09c36a84fd895b85f4426"} Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.186408 4933 generic.go:334] "Generic (PLEG): container finished" podID="dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a" containerID="fbb24a7e5a772eba1990812c40849d9e691917207c4dd782c53e7dbf25c80bb5" exitCode=0 Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.186452 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5tkhr" event={"ID":"dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a","Type":"ContainerDied","Data":"fbb24a7e5a772eba1990812c40849d9e691917207c4dd782c53e7dbf25c80bb5"} Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.199999 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6c5c9d1-7fca-4d76-8336-e3d403524edc-logs\") pod \"e6c5c9d1-7fca-4d76-8336-e3d403524edc\" (UID: \"e6c5c9d1-7fca-4d76-8336-e3d403524edc\") " Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.200239 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6c5c9d1-7fca-4d76-8336-e3d403524edc-nova-metadata-tls-certs\") pod \"e6c5c9d1-7fca-4d76-8336-e3d403524edc\" (UID: \"e6c5c9d1-7fca-4d76-8336-e3d403524edc\") " Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.200560 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8t5w\" (UniqueName: \"kubernetes.io/projected/e6c5c9d1-7fca-4d76-8336-e3d403524edc-kube-api-access-t8t5w\") pod \"e6c5c9d1-7fca-4d76-8336-e3d403524edc\" (UID: \"e6c5c9d1-7fca-4d76-8336-e3d403524edc\") " Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.200630 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6c5c9d1-7fca-4d76-8336-e3d403524edc-combined-ca-bundle\") pod \"e6c5c9d1-7fca-4d76-8336-e3d403524edc\" (UID: \"e6c5c9d1-7fca-4d76-8336-e3d403524edc\") " Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.200664 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/empty-dir/e6c5c9d1-7fca-4d76-8336-e3d403524edc-logs" (OuterVolumeSpecName: "logs") pod "e6c5c9d1-7fca-4d76-8336-e3d403524edc" (UID: "e6c5c9d1-7fca-4d76-8336-e3d403524edc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.200698 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6c5c9d1-7fca-4d76-8336-e3d403524edc-config-data\") pod \"e6c5c9d1-7fca-4d76-8336-e3d403524edc\" (UID: \"e6c5c9d1-7fca-4d76-8336-e3d403524edc\") " Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.202903 4933 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6c5c9d1-7fca-4d76-8336-e3d403524edc-logs\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.210066 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6c5c9d1-7fca-4d76-8336-e3d403524edc-kube-api-access-t8t5w" (OuterVolumeSpecName: "kube-api-access-t8t5w") pod "e6c5c9d1-7fca-4d76-8336-e3d403524edc" (UID: "e6c5c9d1-7fca-4d76-8336-e3d403524edc"). InnerVolumeSpecName "kube-api-access-t8t5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.228719 4933 scope.go:117] "RemoveContainer" containerID="67155c7cf51b919ff1429262965c239ca2e9ce44d5b9bd86547036b30ad099c3" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.243767 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6c5c9d1-7fca-4d76-8336-e3d403524edc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6c5c9d1-7fca-4d76-8336-e3d403524edc" (UID: "e6c5c9d1-7fca-4d76-8336-e3d403524edc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.252696 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6c5c9d1-7fca-4d76-8336-e3d403524edc-config-data" (OuterVolumeSpecName: "config-data") pod "e6c5c9d1-7fca-4d76-8336-e3d403524edc" (UID: "e6c5c9d1-7fca-4d76-8336-e3d403524edc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.254596 4933 scope.go:117] "RemoveContainer" containerID="de1287f63fa4aeee5e46e33e7bb9061862653c5617b126ece25492a3de1c77da" Dec 01 09:54:14 crc kubenswrapper[4933]: E1201 09:54:14.255078 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de1287f63fa4aeee5e46e33e7bb9061862653c5617b126ece25492a3de1c77da\": container with ID starting with de1287f63fa4aeee5e46e33e7bb9061862653c5617b126ece25492a3de1c77da not found: ID does not exist" containerID="de1287f63fa4aeee5e46e33e7bb9061862653c5617b126ece25492a3de1c77da" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.255128 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de1287f63fa4aeee5e46e33e7bb9061862653c5617b126ece25492a3de1c77da"} err="failed to get container status \"de1287f63fa4aeee5e46e33e7bb9061862653c5617b126ece25492a3de1c77da\": rpc error: code = NotFound desc = could not find container \"de1287f63fa4aeee5e46e33e7bb9061862653c5617b126ece25492a3de1c77da\": container with ID starting with de1287f63fa4aeee5e46e33e7bb9061862653c5617b126ece25492a3de1c77da not found: ID does not exist" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.255158 4933 scope.go:117] "RemoveContainer" containerID="67155c7cf51b919ff1429262965c239ca2e9ce44d5b9bd86547036b30ad099c3" Dec 01 09:54:14 crc kubenswrapper[4933]: E1201 09:54:14.255628 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67155c7cf51b919ff1429262965c239ca2e9ce44d5b9bd86547036b30ad099c3\": container with ID starting with 67155c7cf51b919ff1429262965c239ca2e9ce44d5b9bd86547036b30ad099c3 not found: ID does not exist" containerID="67155c7cf51b919ff1429262965c239ca2e9ce44d5b9bd86547036b30ad099c3" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.255657 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67155c7cf51b919ff1429262965c239ca2e9ce44d5b9bd86547036b30ad099c3"} err="failed to get container status \"67155c7cf51b919ff1429262965c239ca2e9ce44d5b9bd86547036b30ad099c3\": rpc error: code = NotFound desc = could not find container \"67155c7cf51b919ff1429262965c239ca2e9ce44d5b9bd86547036b30ad099c3\": container with ID starting with 67155c7cf51b919ff1429262965c239ca2e9ce44d5b9bd86547036b30ad099c3 not found: ID does not exist" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.255672 4933 scope.go:117] "RemoveContainer" containerID="de1287f63fa4aeee5e46e33e7bb9061862653c5617b126ece25492a3de1c77da" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.255917 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de1287f63fa4aeee5e46e33e7bb9061862653c5617b126ece25492a3de1c77da"} err="failed to get container status \"de1287f63fa4aeee5e46e33e7bb9061862653c5617b126ece25492a3de1c77da\": rpc error: code = NotFound desc = could not find container \"de1287f63fa4aeee5e46e33e7bb9061862653c5617b126ece25492a3de1c77da\": container with ID starting with de1287f63fa4aeee5e46e33e7bb9061862653c5617b126ece25492a3de1c77da not found: ID does not exist" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.255941 4933 scope.go:117] "RemoveContainer" containerID="67155c7cf51b919ff1429262965c239ca2e9ce44d5b9bd86547036b30ad099c3" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.256159 4933 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67155c7cf51b919ff1429262965c239ca2e9ce44d5b9bd86547036b30ad099c3"} err="failed to get container status \"67155c7cf51b919ff1429262965c239ca2e9ce44d5b9bd86547036b30ad099c3\": rpc error: code = NotFound desc = could not find container \"67155c7cf51b919ff1429262965c239ca2e9ce44d5b9bd86547036b30ad099c3\": container with ID starting with 67155c7cf51b919ff1429262965c239ca2e9ce44d5b9bd86547036b30ad099c3 not found: ID does not exist" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.269842 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6c5c9d1-7fca-4d76-8336-e3d403524edc-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e6c5c9d1-7fca-4d76-8336-e3d403524edc" (UID: "e6c5c9d1-7fca-4d76-8336-e3d403524edc"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.305082 4933 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6c5c9d1-7fca-4d76-8336-e3d403524edc-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.305151 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8t5w\" (UniqueName: \"kubernetes.io/projected/e6c5c9d1-7fca-4d76-8336-e3d403524edc-kube-api-access-t8t5w\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.305163 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6c5c9d1-7fca-4d76-8336-e3d403524edc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.305175 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6c5c9d1-7fca-4d76-8336-e3d403524edc-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.601100 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.611056 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.627850 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:54:14 crc kubenswrapper[4933]: E1201 09:54:14.628530 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6c5c9d1-7fca-4d76-8336-e3d403524edc" containerName="nova-metadata-log" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.628554 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6c5c9d1-7fca-4d76-8336-e3d403524edc" containerName="nova-metadata-log" Dec 01 09:54:14 crc kubenswrapper[4933]: E1201 09:54:14.628578 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6c5c9d1-7fca-4d76-8336-e3d403524edc" containerName="nova-metadata-metadata" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.628588 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6c5c9d1-7fca-4d76-8336-e3d403524edc" containerName="nova-metadata-metadata" Dec 01 09:54:14 crc kubenswrapper[4933]: E1201 09:54:14.628619 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd09dcd1-e9a3-40dd-9497-11d652bad925" containerName="nova-manage" Dec 01 
09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.628627 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd09dcd1-e9a3-40dd-9497-11d652bad925" containerName="nova-manage" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.628817 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6c5c9d1-7fca-4d76-8336-e3d403524edc" containerName="nova-metadata-log" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.628839 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd09dcd1-e9a3-40dd-9497-11d652bad925" containerName="nova-manage" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.628848 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6c5c9d1-7fca-4d76-8336-e3d403524edc" containerName="nova-metadata-metadata" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.634637 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.639966 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.643229 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.682379 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.712866 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b42cd5bd-bb2b-430b-a321-8da42af87665-config-data\") pod \"nova-metadata-0\" (UID: \"b42cd5bd-bb2b-430b-a321-8da42af87665\") " pod="openstack/nova-metadata-0" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.712955 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b42cd5bd-bb2b-430b-a321-8da42af87665-logs\") pod \"nova-metadata-0\" (UID: \"b42cd5bd-bb2b-430b-a321-8da42af87665\") " pod="openstack/nova-metadata-0" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.713449 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b42cd5bd-bb2b-430b-a321-8da42af87665-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b42cd5bd-bb2b-430b-a321-8da42af87665\") " pod="openstack/nova-metadata-0" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.713663 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf8j2\" (UniqueName: \"kubernetes.io/projected/b42cd5bd-bb2b-430b-a321-8da42af87665-kube-api-access-kf8j2\") pod \"nova-metadata-0\" (UID: \"b42cd5bd-bb2b-430b-a321-8da42af87665\") " pod="openstack/nova-metadata-0" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.713842 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42cd5bd-bb2b-430b-a321-8da42af87665-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b42cd5bd-bb2b-430b-a321-8da42af87665\") " pod="openstack/nova-metadata-0" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.815648 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/b42cd5bd-bb2b-430b-a321-8da42af87665-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b42cd5bd-bb2b-430b-a321-8da42af87665\") " pod="openstack/nova-metadata-0" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.815823 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf8j2\" (UniqueName: \"kubernetes.io/projected/b42cd5bd-bb2b-430b-a321-8da42af87665-kube-api-access-kf8j2\") pod \"nova-metadata-0\" (UID: \"b42cd5bd-bb2b-430b-a321-8da42af87665\") " pod="openstack/nova-metadata-0" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.815876 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42cd5bd-bb2b-430b-a321-8da42af87665-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b42cd5bd-bb2b-430b-a321-8da42af87665\") " pod="openstack/nova-metadata-0" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.815912 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b42cd5bd-bb2b-430b-a321-8da42af87665-config-data\") pod \"nova-metadata-0\" (UID: \"b42cd5bd-bb2b-430b-a321-8da42af87665\") " pod="openstack/nova-metadata-0" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.815956 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b42cd5bd-bb2b-430b-a321-8da42af87665-logs\") pod \"nova-metadata-0\" (UID: \"b42cd5bd-bb2b-430b-a321-8da42af87665\") " pod="openstack/nova-metadata-0" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.816499 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b42cd5bd-bb2b-430b-a321-8da42af87665-logs\") pod \"nova-metadata-0\" (UID: \"b42cd5bd-bb2b-430b-a321-8da42af87665\") " pod="openstack/nova-metadata-0" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.823210 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b42cd5bd-bb2b-430b-a321-8da42af87665-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b42cd5bd-bb2b-430b-a321-8da42af87665\") " pod="openstack/nova-metadata-0" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.823239 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42cd5bd-bb2b-430b-a321-8da42af87665-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b42cd5bd-bb2b-430b-a321-8da42af87665\") " pod="openstack/nova-metadata-0" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.824189 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b42cd5bd-bb2b-430b-a321-8da42af87665-config-data\") pod \"nova-metadata-0\" (UID: \"b42cd5bd-bb2b-430b-a321-8da42af87665\") " pod="openstack/nova-metadata-0" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.834346 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf8j2\" (UniqueName: \"kubernetes.io/projected/b42cd5bd-bb2b-430b-a321-8da42af87665-kube-api-access-kf8j2\") pod \"nova-metadata-0\" (UID: \"b42cd5bd-bb2b-430b-a321-8da42af87665\") " pod="openstack/nova-metadata-0" Dec 01 09:54:14 crc kubenswrapper[4933]: I1201 09:54:14.977108 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 09:54:15 crc kubenswrapper[4933]: E1201 09:54:15.396273 4933 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fad913540c075b770ed527f9d26e74cb870956bfc917580902a0358ca9a2f58" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 01 09:54:15 crc kubenswrapper[4933]: E1201 09:54:15.399084 4933 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fad913540c075b770ed527f9d26e74cb870956bfc917580902a0358ca9a2f58" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 01 09:54:15 crc kubenswrapper[4933]: E1201 09:54:15.401284 4933 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fad913540c075b770ed527f9d26e74cb870956bfc917580902a0358ca9a2f58" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 01 09:54:15 crc kubenswrapper[4933]: E1201 09:54:15.401364 4933 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="dc6eb5c1-55f5-4bf4-a746-73fd82be6850" containerName="nova-scheduler-scheduler" Dec 01 09:54:15 crc kubenswrapper[4933]: I1201 09:54:15.497068 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:54:15 crc kubenswrapper[4933]: W1201 09:54:15.509351 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb42cd5bd_bb2b_430b_a321_8da42af87665.slice/crio-6b60c116c88c205569ff4a28f94ffa8e870dd998dd01924c12999e8f21a04246 WatchSource:0}: Error finding container 6b60c116c88c205569ff4a28f94ffa8e870dd998dd01924c12999e8f21a04246: Status 404 returned error can't find the container with id 6b60c116c88c205569ff4a28f94ffa8e870dd998dd01924c12999e8f21a04246
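The three "ExecSync cmd from runtime service failed" records above are the nova-scheduler-0 readiness probe colliding with the pod's own termination: the probe is an exec probe running /usr/bin/pgrep -r DRST nova-scheduler inside the container, and CRI-O refuses to register a new exec PID in a container that is already stopping, so the prober retries and then reports "Probe errored". Reduced to its essence, an exec probe is run-command-and-check-exit-status; a sketch of that check (the -r/--runstates flag is procps-ng pgrep and appears to limit matches to processes in run states D, R, S or T, so a scheduler process in any other state would fail the probe):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The probe command from the records above: succeeds (exit 0) iff a
	// matching nova-scheduler process exists in one of the states D/R/S/T.
	cmd := exec.Command("/usr/bin/pgrep", "-r", "DRST", "nova-scheduler")
	if err := cmd.Run(); err != nil {
		// Non-zero exit, or the command could not be run at all (the
		// "container is stopping" case above is the latter, surfaced
		// through the CRI instead of through pgrep).
		fmt.Println("not ready:", err)
		return
	}
	fmt.Println("ready")
}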
Dec 01 09:54:15 crc kubenswrapper[4933]: I1201 09:54:15.682047 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5tkhr" Dec 01 09:54:15 crc kubenswrapper[4933]: I1201 09:54:15.682048 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6c5c9d1-7fca-4d76-8336-e3d403524edc" path="/var/lib/kubelet/pods/e6c5c9d1-7fca-4d76-8336-e3d403524edc/volumes" Dec 01 09:54:15 crc kubenswrapper[4933]: I1201 09:54:15.800738 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" Dec 01 09:54:15 crc kubenswrapper[4933]: I1201 09:54:15.837204 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a-config-data\") pod \"dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a\" (UID: \"dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a\") " Dec 01 09:54:15 crc kubenswrapper[4933]: I1201 09:54:15.837274 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2c9fz\" (UniqueName: \"kubernetes.io/projected/dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a-kube-api-access-2c9fz\") pod \"dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a\" (UID: \"dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a\") " Dec 01 09:54:15 crc kubenswrapper[4933]: I1201 09:54:15.837326 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a-combined-ca-bundle\") pod \"dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a\" (UID: \"dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a\") " Dec 01 09:54:15 crc kubenswrapper[4933]: I1201 09:54:15.837550 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a-scripts\") pod \"dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a\" (UID: \"dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a\") " Dec 01 09:54:15 crc kubenswrapper[4933]: I1201 09:54:15.846934 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a-kube-api-access-2c9fz" (OuterVolumeSpecName: "kube-api-access-2c9fz") pod "dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a" (UID: "dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a"). InnerVolumeSpecName "kube-api-access-2c9fz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:54:15 crc kubenswrapper[4933]: I1201 09:54:15.852388 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a-scripts" (OuterVolumeSpecName: "scripts") pod "dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a" (UID: "dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a"). InnerVolumeSpecName "scripts".
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:15 crc kubenswrapper[4933]: I1201 09:54:15.872886 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-9wb2z"] Dec 01 09:54:15 crc kubenswrapper[4933]: I1201 09:54:15.873229 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" podUID="3a28f786-8a38-4e52-ba6c-21550508ca03" containerName="dnsmasq-dns" containerID="cri-o://e9c1eeae23045db9018521a1bf647918ded90ff9a9aa98fa3c10fff132f416b7" gracePeriod=10 Dec 01 09:54:15 crc kubenswrapper[4933]: I1201 09:54:15.899247 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a" (UID: "dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:15 crc kubenswrapper[4933]: I1201 09:54:15.903202 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a-config-data" (OuterVolumeSpecName: "config-data") pod "dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a" (UID: "dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:15 crc kubenswrapper[4933]: I1201 09:54:15.941699 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:15 crc kubenswrapper[4933]: I1201 09:54:15.941743 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2c9fz\" (UniqueName: \"kubernetes.io/projected/dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a-kube-api-access-2c9fz\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:15 crc kubenswrapper[4933]: I1201 09:54:15.941755 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:15 crc kubenswrapper[4933]: I1201 09:54:15.941763 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:16 crc kubenswrapper[4933]: I1201 09:54:16.217954 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b42cd5bd-bb2b-430b-a321-8da42af87665","Type":"ContainerStarted","Data":"a87eea3a7fab63c68f661c8706f492398ea113b02854e0cd9555c7eb9a490fb0"} Dec 01 09:54:16 crc kubenswrapper[4933]: I1201 09:54:16.219750 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b42cd5bd-bb2b-430b-a321-8da42af87665","Type":"ContainerStarted","Data":"6b60c116c88c205569ff4a28f94ffa8e870dd998dd01924c12999e8f21a04246"} Dec 01 09:54:16 crc kubenswrapper[4933]: I1201 09:54:16.222681 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5tkhr" Dec 01 09:54:16 crc kubenswrapper[4933]: I1201 09:54:16.223683 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5tkhr" event={"ID":"dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a","Type":"ContainerDied","Data":"6978cb26fd0f646376a31cbcbc508d3c7ba37b09d62be033b11b781c70fe31fa"} Dec 01 09:54:16 crc kubenswrapper[4933]: I1201 09:54:16.223760 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6978cb26fd0f646376a31cbcbc508d3c7ba37b09d62be033b11b781c70fe31fa" Dec 01 09:54:16 crc kubenswrapper[4933]: I1201 09:54:16.228427 4933 generic.go:334] "Generic (PLEG): container finished" podID="3a28f786-8a38-4e52-ba6c-21550508ca03" containerID="e9c1eeae23045db9018521a1bf647918ded90ff9a9aa98fa3c10fff132f416b7" exitCode=0 Dec 01 09:54:16 crc kubenswrapper[4933]: I1201 09:54:16.228498 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" event={"ID":"3a28f786-8a38-4e52-ba6c-21550508ca03","Type":"ContainerDied","Data":"e9c1eeae23045db9018521a1bf647918ded90ff9a9aa98fa3c10fff132f416b7"} Dec 01 09:54:16 crc kubenswrapper[4933]: I1201 09:54:16.309566 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 01 09:54:16 crc kubenswrapper[4933]: E1201 09:54:16.310198 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a" containerName="nova-cell1-conductor-db-sync" Dec 01 09:54:16 crc kubenswrapper[4933]: I1201 09:54:16.310241 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a" containerName="nova-cell1-conductor-db-sync" Dec 01 09:54:16 crc kubenswrapper[4933]: I1201 09:54:16.310581 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a" containerName="nova-cell1-conductor-db-sync" Dec 01 09:54:16 crc kubenswrapper[4933]: I1201 09:54:16.311544 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 01 09:54:16 crc kubenswrapper[4933]: I1201 09:54:16.316596 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 01 09:54:16 crc kubenswrapper[4933]: I1201 09:54:16.322016 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 01 09:54:16 crc kubenswrapper[4933]: I1201 09:54:16.401584 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 01 09:54:16 crc kubenswrapper[4933]: I1201 09:54:16.457887 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc4cn\" (UniqueName: \"kubernetes.io/projected/80bcbf70-b098-48a4-9dfe-fdb8c3b87e8e-kube-api-access-sc4cn\") pod \"nova-cell1-conductor-0\" (UID: \"80bcbf70-b098-48a4-9dfe-fdb8c3b87e8e\") " pod="openstack/nova-cell1-conductor-0" Dec 01 09:54:16 crc kubenswrapper[4933]: I1201 09:54:16.458030 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80bcbf70-b098-48a4-9dfe-fdb8c3b87e8e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"80bcbf70-b098-48a4-9dfe-fdb8c3b87e8e\") " pod="openstack/nova-cell1-conductor-0" Dec 01 09:54:16 crc kubenswrapper[4933]: I1201 09:54:16.458111 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80bcbf70-b098-48a4-9dfe-fdb8c3b87e8e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"80bcbf70-b098-48a4-9dfe-fdb8c3b87e8e\") " pod="openstack/nova-cell1-conductor-0" Dec 01 09:54:16 crc kubenswrapper[4933]: I1201 09:54:16.559934 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80bcbf70-b098-48a4-9dfe-fdb8c3b87e8e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"80bcbf70-b098-48a4-9dfe-fdb8c3b87e8e\") " pod="openstack/nova-cell1-conductor-0" Dec 01 09:54:16 crc kubenswrapper[4933]: I1201 09:54:16.560057 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc4cn\" (UniqueName: \"kubernetes.io/projected/80bcbf70-b098-48a4-9dfe-fdb8c3b87e8e-kube-api-access-sc4cn\") pod \"nova-cell1-conductor-0\" (UID: \"80bcbf70-b098-48a4-9dfe-fdb8c3b87e8e\") " pod="openstack/nova-cell1-conductor-0" Dec 01 09:54:16 crc kubenswrapper[4933]: I1201 09:54:16.560145 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80bcbf70-b098-48a4-9dfe-fdb8c3b87e8e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"80bcbf70-b098-48a4-9dfe-fdb8c3b87e8e\") " pod="openstack/nova-cell1-conductor-0" Dec 01 09:54:16 crc kubenswrapper[4933]: I1201 09:54:16.568501 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80bcbf70-b098-48a4-9dfe-fdb8c3b87e8e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"80bcbf70-b098-48a4-9dfe-fdb8c3b87e8e\") " pod="openstack/nova-cell1-conductor-0" Dec 01 09:54:16 crc kubenswrapper[4933]: I1201 09:54:16.569134 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80bcbf70-b098-48a4-9dfe-fdb8c3b87e8e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: 
\"80bcbf70-b098-48a4-9dfe-fdb8c3b87e8e\") " pod="openstack/nova-cell1-conductor-0" Dec 01 09:54:16 crc kubenswrapper[4933]: I1201 09:54:16.590821 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc4cn\" (UniqueName: \"kubernetes.io/projected/80bcbf70-b098-48a4-9dfe-fdb8c3b87e8e-kube-api-access-sc4cn\") pod \"nova-cell1-conductor-0\" (UID: \"80bcbf70-b098-48a4-9dfe-fdb8c3b87e8e\") " pod="openstack/nova-cell1-conductor-0" Dec 01 09:54:16 crc kubenswrapper[4933]: I1201 09:54:16.650642 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.016829 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.063429 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.079205 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-dns-swift-storage-0\") pod \"3a28f786-8a38-4e52-ba6c-21550508ca03\" (UID: \"3a28f786-8a38-4e52-ba6c-21550508ca03\") " Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.079841 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-ovsdbserver-nb\") pod \"3a28f786-8a38-4e52-ba6c-21550508ca03\" (UID: \"3a28f786-8a38-4e52-ba6c-21550508ca03\") " Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.080049 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-ovsdbserver-sb\") pod \"3a28f786-8a38-4e52-ba6c-21550508ca03\" (UID: \"3a28f786-8a38-4e52-ba6c-21550508ca03\") " Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.080158 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bnpq\" (UniqueName: \"kubernetes.io/projected/3a28f786-8a38-4e52-ba6c-21550508ca03-kube-api-access-2bnpq\") pod \"3a28f786-8a38-4e52-ba6c-21550508ca03\" (UID: \"3a28f786-8a38-4e52-ba6c-21550508ca03\") " Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.080399 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-dns-svc\") pod \"3a28f786-8a38-4e52-ba6c-21550508ca03\" (UID: \"3a28f786-8a38-4e52-ba6c-21550508ca03\") " Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.080505 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-config\") pod \"3a28f786-8a38-4e52-ba6c-21550508ca03\" (UID: \"3a28f786-8a38-4e52-ba6c-21550508ca03\") " Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.109335 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a28f786-8a38-4e52-ba6c-21550508ca03-kube-api-access-2bnpq" (OuterVolumeSpecName: "kube-api-access-2bnpq") pod "3a28f786-8a38-4e52-ba6c-21550508ca03" (UID: "3a28f786-8a38-4e52-ba6c-21550508ca03"). InnerVolumeSpecName "kube-api-access-2bnpq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.145395 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3a28f786-8a38-4e52-ba6c-21550508ca03" (UID: "3a28f786-8a38-4e52-ba6c-21550508ca03"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.165615 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3a28f786-8a38-4e52-ba6c-21550508ca03" (UID: "3a28f786-8a38-4e52-ba6c-21550508ca03"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.181812 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-config" (OuterVolumeSpecName: "config") pod "3a28f786-8a38-4e52-ba6c-21550508ca03" (UID: "3a28f786-8a38-4e52-ba6c-21550508ca03"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.182679 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3a28f786-8a38-4e52-ba6c-21550508ca03" (UID: "3a28f786-8a38-4e52-ba6c-21550508ca03"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.182772 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d672f0c-b605-4904-bbf9-45b2bd0a1f92-logs\") pod \"5d672f0c-b605-4904-bbf9-45b2bd0a1f92\" (UID: \"5d672f0c-b605-4904-bbf9-45b2bd0a1f92\") " Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.183319 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-dns-swift-storage-0\") pod \"3a28f786-8a38-4e52-ba6c-21550508ca03\" (UID: \"3a28f786-8a38-4e52-ba6c-21550508ca03\") " Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.183522 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbnhz\" (UniqueName: \"kubernetes.io/projected/5d672f0c-b605-4904-bbf9-45b2bd0a1f92-kube-api-access-lbnhz\") pod \"5d672f0c-b605-4904-bbf9-45b2bd0a1f92\" (UID: \"5d672f0c-b605-4904-bbf9-45b2bd0a1f92\") " Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.183356 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d672f0c-b605-4904-bbf9-45b2bd0a1f92-logs" (OuterVolumeSpecName: "logs") pod "5d672f0c-b605-4904-bbf9-45b2bd0a1f92" (UID: "5d672f0c-b605-4904-bbf9-45b2bd0a1f92"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:54:17 crc kubenswrapper[4933]: W1201 09:54:17.183498 4933 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/3a28f786-8a38-4e52-ba6c-21550508ca03/volumes/kubernetes.io~configmap/dns-swift-storage-0 Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.183677 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3a28f786-8a38-4e52-ba6c-21550508ca03" (UID: "3a28f786-8a38-4e52-ba6c-21550508ca03"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.183770 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d672f0c-b605-4904-bbf9-45b2bd0a1f92-config-data\") pod \"5d672f0c-b605-4904-bbf9-45b2bd0a1f92\" (UID: \"5d672f0c-b605-4904-bbf9-45b2bd0a1f92\") " Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.184089 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d672f0c-b605-4904-bbf9-45b2bd0a1f92-combined-ca-bundle\") pod \"5d672f0c-b605-4904-bbf9-45b2bd0a1f92\" (UID: \"5d672f0c-b605-4904-bbf9-45b2bd0a1f92\") " Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.185409 4933 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.185529 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bnpq\" (UniqueName: \"kubernetes.io/projected/3a28f786-8a38-4e52-ba6c-21550508ca03-kube-api-access-2bnpq\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.185604 4933 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.185668 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.185730 4933 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d672f0c-b605-4904-bbf9-45b2bd0a1f92-logs\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.185786 4933 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.191534 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d672f0c-b605-4904-bbf9-45b2bd0a1f92-kube-api-access-lbnhz" (OuterVolumeSpecName: "kube-api-access-lbnhz") pod "5d672f0c-b605-4904-bbf9-45b2bd0a1f92" (UID: "5d672f0c-b605-4904-bbf9-45b2bd0a1f92"). InnerVolumeSpecName "kube-api-access-lbnhz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.195787 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3a28f786-8a38-4e52-ba6c-21550508ca03" (UID: "3a28f786-8a38-4e52-ba6c-21550508ca03"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.217053 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d672f0c-b605-4904-bbf9-45b2bd0a1f92-config-data" (OuterVolumeSpecName: "config-data") pod "5d672f0c-b605-4904-bbf9-45b2bd0a1f92" (UID: "5d672f0c-b605-4904-bbf9-45b2bd0a1f92"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.220450 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d672f0c-b605-4904-bbf9-45b2bd0a1f92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d672f0c-b605-4904-bbf9-45b2bd0a1f92" (UID: "5d672f0c-b605-4904-bbf9-45b2bd0a1f92"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.239979 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b42cd5bd-bb2b-430b-a321-8da42af87665","Type":"ContainerStarted","Data":"aa7fd287be4c94b09f11e1e73c1ab00b689e88b568b2c1812c2512fbdf25eafa"} Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.243868 4933 generic.go:334] "Generic (PLEG): container finished" podID="5d672f0c-b605-4904-bbf9-45b2bd0a1f92" containerID="f514b348e5e7e6acc577f77f88dcb18b88e1f50de5754ec566023827eedbd407" exitCode=0 Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.244085 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5d672f0c-b605-4904-bbf9-45b2bd0a1f92","Type":"ContainerDied","Data":"f514b348e5e7e6acc577f77f88dcb18b88e1f50de5754ec566023827eedbd407"} Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.244149 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.244579 4933 scope.go:117] "RemoveContainer" containerID="f514b348e5e7e6acc577f77f88dcb18b88e1f50de5754ec566023827eedbd407" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.244457 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5d672f0c-b605-4904-bbf9-45b2bd0a1f92","Type":"ContainerDied","Data":"33c942eca76106f67e57c5c48c611f468bddb33a6ca4440b285d5ee61b152a52"} Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.248453 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" event={"ID":"3a28f786-8a38-4e52-ba6c-21550508ca03","Type":"ContainerDied","Data":"9340c9e748ac0c0f36600deb17f3f4e55083e6a98905d5c49e02969165924e87"} Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.250098 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-9wb2z" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.262597 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.2625799300000002 podStartE2EDuration="3.26257993s" podCreationTimestamp="2025-12-01 09:54:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:54:17.256471431 +0000 UTC m=+1347.898195066" watchObservedRunningTime="2025-12-01 09:54:17.26257993 +0000 UTC m=+1347.904303545" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.296631 4933 scope.go:117] "RemoveContainer" containerID="98b76d2ba0a8f08d70a254415999c6a6c4afce06e3e09c36a84fd895b85f4426" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.298598 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d672f0c-b605-4904-bbf9-45b2bd0a1f92-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.298629 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbnhz\" (UniqueName: \"kubernetes.io/projected/5d672f0c-b605-4904-bbf9-45b2bd0a1f92-kube-api-access-lbnhz\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.298638 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d672f0c-b605-4904-bbf9-45b2bd0a1f92-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.298648 4933 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a28f786-8a38-4e52-ba6c-21550508ca03-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.303787 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-9wb2z"] Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.325660 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-9wb2z"] Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.334864 4933 scope.go:117] "RemoveContainer" containerID="f514b348e5e7e6acc577f77f88dcb18b88e1f50de5754ec566023827eedbd407" Dec 01 09:54:17 crc kubenswrapper[4933]: E1201 09:54:17.336131 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f514b348e5e7e6acc577f77f88dcb18b88e1f50de5754ec566023827eedbd407\": container with ID starting with f514b348e5e7e6acc577f77f88dcb18b88e1f50de5754ec566023827eedbd407 not found: ID does not exist" containerID="f514b348e5e7e6acc577f77f88dcb18b88e1f50de5754ec566023827eedbd407" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.336192 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f514b348e5e7e6acc577f77f88dcb18b88e1f50de5754ec566023827eedbd407"} err="failed to get container status \"f514b348e5e7e6acc577f77f88dcb18b88e1f50de5754ec566023827eedbd407\": rpc error: code = NotFound desc = could not find container \"f514b348e5e7e6acc577f77f88dcb18b88e1f50de5754ec566023827eedbd407\": container with ID starting with f514b348e5e7e6acc577f77f88dcb18b88e1f50de5754ec566023827eedbd407 not found: ID does not exist"
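The pod_startup_latency_tracker record above for nova-metadata-0 is plain arithmetic: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp, and the pulling timestamps are Go zero-value times because no image pull was needed. A small check of the numbers, with both timestamps copied from the record:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's default Time.String() layout, which is what the record uses.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-12-01 09:54:14 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-12-01 09:54:17.26257993 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 3.26257993 seconds, possibly with trailing float64 noise;
	// the record's podStartSLOduration=3.2625799300000002 is just the
	// float64 representation of this same value.
	fmt.Println(observed.Sub(created).Seconds())
}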
"RemoveContainer" containerID="98b76d2ba0a8f08d70a254415999c6a6c4afce06e3e09c36a84fd895b85f4426" Dec 01 09:54:17 crc kubenswrapper[4933]: E1201 09:54:17.336588 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98b76d2ba0a8f08d70a254415999c6a6c4afce06e3e09c36a84fd895b85f4426\": container with ID starting with 98b76d2ba0a8f08d70a254415999c6a6c4afce06e3e09c36a84fd895b85f4426 not found: ID does not exist" containerID="98b76d2ba0a8f08d70a254415999c6a6c4afce06e3e09c36a84fd895b85f4426" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.336616 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98b76d2ba0a8f08d70a254415999c6a6c4afce06e3e09c36a84fd895b85f4426"} err="failed to get container status \"98b76d2ba0a8f08d70a254415999c6a6c4afce06e3e09c36a84fd895b85f4426\": rpc error: code = NotFound desc = could not find container \"98b76d2ba0a8f08d70a254415999c6a6c4afce06e3e09c36a84fd895b85f4426\": container with ID starting with 98b76d2ba0a8f08d70a254415999c6a6c4afce06e3e09c36a84fd895b85f4426 not found: ID does not exist" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.336638 4933 scope.go:117] "RemoveContainer" containerID="e9c1eeae23045db9018521a1bf647918ded90ff9a9aa98fa3c10fff132f416b7" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.340093 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 01 09:54:17 crc kubenswrapper[4933]: W1201 09:54:17.342869 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80bcbf70_b098_48a4_9dfe_fdb8c3b87e8e.slice/crio-1fd08cb15946afe04eb9d8467fd42f3a00193ce3f989dc20ec21e60a2b89d203 WatchSource:0}: Error finding container 1fd08cb15946afe04eb9d8467fd42f3a00193ce3f989dc20ec21e60a2b89d203: Status 404 returned error can't find the container with id 1fd08cb15946afe04eb9d8467fd42f3a00193ce3f989dc20ec21e60a2b89d203 Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.370733 4933 scope.go:117] "RemoveContainer" containerID="01e2a7dfb407ac46ccec394872ecd3dd9194e7eeb9fd10dd801898d2e29d2b0c" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.385430 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.394533 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.401058 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 01 09:54:17 crc kubenswrapper[4933]: E1201 09:54:17.401798 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d672f0c-b605-4904-bbf9-45b2bd0a1f92" containerName="nova-api-log" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.401825 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d672f0c-b605-4904-bbf9-45b2bd0a1f92" containerName="nova-api-log" Dec 01 09:54:17 crc kubenswrapper[4933]: E1201 09:54:17.401851 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a28f786-8a38-4e52-ba6c-21550508ca03" containerName="dnsmasq-dns" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.401861 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a28f786-8a38-4e52-ba6c-21550508ca03" containerName="dnsmasq-dns" Dec 01 09:54:17 crc kubenswrapper[4933]: E1201 09:54:17.401907 4933 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5d672f0c-b605-4904-bbf9-45b2bd0a1f92" containerName="nova-api-api" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.401916 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d672f0c-b605-4904-bbf9-45b2bd0a1f92" containerName="nova-api-api" Dec 01 09:54:17 crc kubenswrapper[4933]: E1201 09:54:17.401938 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a28f786-8a38-4e52-ba6c-21550508ca03" containerName="init" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.401945 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a28f786-8a38-4e52-ba6c-21550508ca03" containerName="init" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.402261 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a28f786-8a38-4e52-ba6c-21550508ca03" containerName="dnsmasq-dns" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.402353 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d672f0c-b605-4904-bbf9-45b2bd0a1f92" containerName="nova-api-log" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.402370 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d672f0c-b605-4904-bbf9-45b2bd0a1f92" containerName="nova-api-api" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.403977 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.406391 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.407945 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.505390 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbkkb\" (UniqueName: \"kubernetes.io/projected/a99ae55e-670e-451b-8c67-85d537db7077-kube-api-access-xbkkb\") pod \"nova-api-0\" (UID: \"a99ae55e-670e-451b-8c67-85d537db7077\") " pod="openstack/nova-api-0" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.505529 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a99ae55e-670e-451b-8c67-85d537db7077-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a99ae55e-670e-451b-8c67-85d537db7077\") " pod="openstack/nova-api-0" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.505576 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a99ae55e-670e-451b-8c67-85d537db7077-logs\") pod \"nova-api-0\" (UID: \"a99ae55e-670e-451b-8c67-85d537db7077\") " pod="openstack/nova-api-0" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.505662 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a99ae55e-670e-451b-8c67-85d537db7077-config-data\") pod \"nova-api-0\" (UID: \"a99ae55e-670e-451b-8c67-85d537db7077\") " pod="openstack/nova-api-0" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.607526 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbkkb\" (UniqueName: \"kubernetes.io/projected/a99ae55e-670e-451b-8c67-85d537db7077-kube-api-access-xbkkb\") pod \"nova-api-0\" (UID: \"a99ae55e-670e-451b-8c67-85d537db7077\") " 
pod="openstack/nova-api-0" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.607628 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a99ae55e-670e-451b-8c67-85d537db7077-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a99ae55e-670e-451b-8c67-85d537db7077\") " pod="openstack/nova-api-0" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.607667 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a99ae55e-670e-451b-8c67-85d537db7077-logs\") pod \"nova-api-0\" (UID: \"a99ae55e-670e-451b-8c67-85d537db7077\") " pod="openstack/nova-api-0" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.607724 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a99ae55e-670e-451b-8c67-85d537db7077-config-data\") pod \"nova-api-0\" (UID: \"a99ae55e-670e-451b-8c67-85d537db7077\") " pod="openstack/nova-api-0" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.608426 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a99ae55e-670e-451b-8c67-85d537db7077-logs\") pod \"nova-api-0\" (UID: \"a99ae55e-670e-451b-8c67-85d537db7077\") " pod="openstack/nova-api-0" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.613369 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a99ae55e-670e-451b-8c67-85d537db7077-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a99ae55e-670e-451b-8c67-85d537db7077\") " pod="openstack/nova-api-0" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.613930 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a99ae55e-670e-451b-8c67-85d537db7077-config-data\") pod \"nova-api-0\" (UID: \"a99ae55e-670e-451b-8c67-85d537db7077\") " pod="openstack/nova-api-0" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.629503 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbkkb\" (UniqueName: \"kubernetes.io/projected/a99ae55e-670e-451b-8c67-85d537db7077-kube-api-access-xbkkb\") pod \"nova-api-0\" (UID: \"a99ae55e-670e-451b-8c67-85d537db7077\") " pod="openstack/nova-api-0" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.681399 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a28f786-8a38-4e52-ba6c-21550508ca03" path="/var/lib/kubelet/pods/3a28f786-8a38-4e52-ba6c-21550508ca03/volumes" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.682871 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d672f0c-b605-4904-bbf9-45b2bd0a1f92" path="/var/lib/kubelet/pods/5d672f0c-b605-4904-bbf9-45b2bd0a1f92/volumes" Dec 01 09:54:17 crc kubenswrapper[4933]: I1201 09:54:17.725238 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 09:54:18 crc kubenswrapper[4933]: I1201 09:54:18.209158 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:54:18 crc kubenswrapper[4933]: I1201 09:54:18.265038 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"80bcbf70-b098-48a4-9dfe-fdb8c3b87e8e","Type":"ContainerStarted","Data":"a599ca32abf4f555b2dc708c2a7989c43f4fd5f5595dc28cae3e0a8c1879a877"} Dec 01 09:54:18 crc kubenswrapper[4933]: I1201 09:54:18.265160 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"80bcbf70-b098-48a4-9dfe-fdb8c3b87e8e","Type":"ContainerStarted","Data":"1fd08cb15946afe04eb9d8467fd42f3a00193ce3f989dc20ec21e60a2b89d203"} Dec 01 09:54:18 crc kubenswrapper[4933]: I1201 09:54:18.266966 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 01 09:54:18 crc kubenswrapper[4933]: I1201 09:54:18.268966 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a99ae55e-670e-451b-8c67-85d537db7077","Type":"ContainerStarted","Data":"6fe8fbffa750b32136c104132aff04628b3cdbe94823a774f14af8057a2570f1"} Dec 01 09:54:18 crc kubenswrapper[4933]: I1201 09:54:18.270894 4933 generic.go:334] "Generic (PLEG): container finished" podID="dc6eb5c1-55f5-4bf4-a746-73fd82be6850" containerID="2fad913540c075b770ed527f9d26e74cb870956bfc917580902a0358ca9a2f58" exitCode=0 Dec 01 09:54:18 crc kubenswrapper[4933]: I1201 09:54:18.271021 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dc6eb5c1-55f5-4bf4-a746-73fd82be6850","Type":"ContainerDied","Data":"2fad913540c075b770ed527f9d26e74cb870956bfc917580902a0358ca9a2f58"} Dec 01 09:54:18 crc kubenswrapper[4933]: I1201 09:54:18.293176 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.293158488 podStartE2EDuration="2.293158488s" podCreationTimestamp="2025-12-01 09:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:54:18.284964337 +0000 UTC m=+1348.926687952" watchObservedRunningTime="2025-12-01 09:54:18.293158488 +0000 UTC m=+1348.934882103" Dec 01 09:54:18 crc kubenswrapper[4933]: I1201 09:54:18.671726 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 09:54:18 crc kubenswrapper[4933]: I1201 09:54:18.844569 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btghc\" (UniqueName: \"kubernetes.io/projected/dc6eb5c1-55f5-4bf4-a746-73fd82be6850-kube-api-access-btghc\") pod \"dc6eb5c1-55f5-4bf4-a746-73fd82be6850\" (UID: \"dc6eb5c1-55f5-4bf4-a746-73fd82be6850\") " Dec 01 09:54:18 crc kubenswrapper[4933]: I1201 09:54:18.844653 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6eb5c1-55f5-4bf4-a746-73fd82be6850-config-data\") pod \"dc6eb5c1-55f5-4bf4-a746-73fd82be6850\" (UID: \"dc6eb5c1-55f5-4bf4-a746-73fd82be6850\") " Dec 01 09:54:18 crc kubenswrapper[4933]: I1201 09:54:18.844837 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6eb5c1-55f5-4bf4-a746-73fd82be6850-combined-ca-bundle\") pod \"dc6eb5c1-55f5-4bf4-a746-73fd82be6850\" (UID: \"dc6eb5c1-55f5-4bf4-a746-73fd82be6850\") " Dec 01 09:54:18 crc kubenswrapper[4933]: I1201 09:54:18.850617 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc6eb5c1-55f5-4bf4-a746-73fd82be6850-kube-api-access-btghc" (OuterVolumeSpecName: "kube-api-access-btghc") pod "dc6eb5c1-55f5-4bf4-a746-73fd82be6850" (UID: "dc6eb5c1-55f5-4bf4-a746-73fd82be6850"). InnerVolumeSpecName "kube-api-access-btghc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:54:18 crc kubenswrapper[4933]: I1201 09:54:18.883013 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc6eb5c1-55f5-4bf4-a746-73fd82be6850-config-data" (OuterVolumeSpecName: "config-data") pod "dc6eb5c1-55f5-4bf4-a746-73fd82be6850" (UID: "dc6eb5c1-55f5-4bf4-a746-73fd82be6850"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:18 crc kubenswrapper[4933]: I1201 09:54:18.887646 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc6eb5c1-55f5-4bf4-a746-73fd82be6850-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc6eb5c1-55f5-4bf4-a746-73fd82be6850" (UID: "dc6eb5c1-55f5-4bf4-a746-73fd82be6850"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:18 crc kubenswrapper[4933]: I1201 09:54:18.947953 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btghc\" (UniqueName: \"kubernetes.io/projected/dc6eb5c1-55f5-4bf4-a746-73fd82be6850-kube-api-access-btghc\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:18 crc kubenswrapper[4933]: I1201 09:54:18.948008 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6eb5c1-55f5-4bf4-a746-73fd82be6850-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:18 crc kubenswrapper[4933]: I1201 09:54:18.948019 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6eb5c1-55f5-4bf4-a746-73fd82be6850-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.291059 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a99ae55e-670e-451b-8c67-85d537db7077","Type":"ContainerStarted","Data":"9d66eff247bef5885648e8753d25f82dc25ff7240f6f0c5f7e698fd0a14461bd"} Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.291522 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a99ae55e-670e-451b-8c67-85d537db7077","Type":"ContainerStarted","Data":"198a8e438ab377bad55d3001d83a73b7a40f7286e35ee2f4d5af46d26825eece"} Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.295416 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dc6eb5c1-55f5-4bf4-a746-73fd82be6850","Type":"ContainerDied","Data":"fd5d31e2ec2c55f28214604916ca451e77e97ec131f9e10b34e88d4f66fedb4d"} Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.295482 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.295492 4933 scope.go:117] "RemoveContainer" containerID="2fad913540c075b770ed527f9d26e74cb870956bfc917580902a0358ca9a2f58" Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.319322 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.319282576 podStartE2EDuration="2.319282576s" podCreationTimestamp="2025-12-01 09:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:54:19.314749155 +0000 UTC m=+1349.956472770" watchObservedRunningTime="2025-12-01 09:54:19.319282576 +0000 UTC m=+1349.961006191" Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.344792 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.373770 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.385485 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:54:19 crc kubenswrapper[4933]: E1201 09:54:19.386023 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc6eb5c1-55f5-4bf4-a746-73fd82be6850" containerName="nova-scheduler-scheduler" Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.386044 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc6eb5c1-55f5-4bf4-a746-73fd82be6850" containerName="nova-scheduler-scheduler" Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.386270 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc6eb5c1-55f5-4bf4-a746-73fd82be6850" containerName="nova-scheduler-scheduler" Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.387035 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.390672 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.405007 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.560702 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a6c6446-7110-4007-9eed-f99541546756-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7a6c6446-7110-4007-9eed-f99541546756\") " pod="openstack/nova-scheduler-0" Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.561427 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9djqz\" (UniqueName: \"kubernetes.io/projected/7a6c6446-7110-4007-9eed-f99541546756-kube-api-access-9djqz\") pod \"nova-scheduler-0\" (UID: \"7a6c6446-7110-4007-9eed-f99541546756\") " pod="openstack/nova-scheduler-0" Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.561532 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a6c6446-7110-4007-9eed-f99541546756-config-data\") pod \"nova-scheduler-0\" (UID: \"7a6c6446-7110-4007-9eed-f99541546756\") " pod="openstack/nova-scheduler-0" Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.664440 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9djqz\" (UniqueName: \"kubernetes.io/projected/7a6c6446-7110-4007-9eed-f99541546756-kube-api-access-9djqz\") pod \"nova-scheduler-0\" (UID: \"7a6c6446-7110-4007-9eed-f99541546756\") " pod="openstack/nova-scheduler-0" Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.664574 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a6c6446-7110-4007-9eed-f99541546756-config-data\") pod \"nova-scheduler-0\" (UID: \"7a6c6446-7110-4007-9eed-f99541546756\") " pod="openstack/nova-scheduler-0" Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.664751 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a6c6446-7110-4007-9eed-f99541546756-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7a6c6446-7110-4007-9eed-f99541546756\") " pod="openstack/nova-scheduler-0" Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.670782 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a6c6446-7110-4007-9eed-f99541546756-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7a6c6446-7110-4007-9eed-f99541546756\") " pod="openstack/nova-scheduler-0" Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.671640 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a6c6446-7110-4007-9eed-f99541546756-config-data\") pod \"nova-scheduler-0\" (UID: \"7a6c6446-7110-4007-9eed-f99541546756\") " pod="openstack/nova-scheduler-0" Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.689824 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9djqz\" (UniqueName: 
\"kubernetes.io/projected/7a6c6446-7110-4007-9eed-f99541546756-kube-api-access-9djqz\") pod \"nova-scheduler-0\" (UID: \"7a6c6446-7110-4007-9eed-f99541546756\") " pod="openstack/nova-scheduler-0" Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.693080 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc6eb5c1-55f5-4bf4-a746-73fd82be6850" path="/var/lib/kubelet/pods/dc6eb5c1-55f5-4bf4-a746-73fd82be6850/volumes" Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.718490 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.978145 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 01 09:54:19 crc kubenswrapper[4933]: I1201 09:54:19.978635 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 01 09:54:20 crc kubenswrapper[4933]: I1201 09:54:20.223910 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:54:20 crc kubenswrapper[4933]: W1201 09:54:20.228677 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a6c6446_7110_4007_9eed_f99541546756.slice/crio-af7e91f04aa6a2f612f2aea2f97d6486e56d1f6a8592e07e0ea0c5ab10fd04a1 WatchSource:0}: Error finding container af7e91f04aa6a2f612f2aea2f97d6486e56d1f6a8592e07e0ea0c5ab10fd04a1: Status 404 returned error can't find the container with id af7e91f04aa6a2f612f2aea2f97d6486e56d1f6a8592e07e0ea0c5ab10fd04a1 Dec 01 09:54:20 crc kubenswrapper[4933]: I1201 09:54:20.314833 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7a6c6446-7110-4007-9eed-f99541546756","Type":"ContainerStarted","Data":"af7e91f04aa6a2f612f2aea2f97d6486e56d1f6a8592e07e0ea0c5ab10fd04a1"} Dec 01 09:54:20 crc kubenswrapper[4933]: I1201 09:54:20.936922 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 09:54:20 crc kubenswrapper[4933]: I1201 09:54:20.937183 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="c7de84cc-bb1a-45ba-bbba-acc140d0facc" containerName="kube-state-metrics" containerID="cri-o://af94f350651a42ea0900d9ce74a1f4c564b641f5865ec051136098b1db9a214a" gracePeriod=30 Dec 01 09:54:21 crc kubenswrapper[4933]: I1201 09:54:21.329927 4933 generic.go:334] "Generic (PLEG): container finished" podID="c7de84cc-bb1a-45ba-bbba-acc140d0facc" containerID="af94f350651a42ea0900d9ce74a1f4c564b641f5865ec051136098b1db9a214a" exitCode=2 Dec 01 09:54:21 crc kubenswrapper[4933]: I1201 09:54:21.330025 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c7de84cc-bb1a-45ba-bbba-acc140d0facc","Type":"ContainerDied","Data":"af94f350651a42ea0900d9ce74a1f4c564b641f5865ec051136098b1db9a214a"} Dec 01 09:54:21 crc kubenswrapper[4933]: I1201 09:54:21.333109 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7a6c6446-7110-4007-9eed-f99541546756","Type":"ContainerStarted","Data":"171a30ec01d50180ff24084f713b7a857ea970201aa4ed9c78b13b36f1f3cc31"} Dec 01 09:54:21 crc kubenswrapper[4933]: I1201 09:54:21.502095 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 01 09:54:21 crc kubenswrapper[4933]: I1201 09:54:21.535973 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.535946703 podStartE2EDuration="2.535946703s" podCreationTimestamp="2025-12-01 09:54:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:54:21.35503691 +0000 UTC m=+1351.996760535" watchObservedRunningTime="2025-12-01 09:54:21.535946703 +0000 UTC m=+1352.177670318" Dec 01 09:54:21 crc kubenswrapper[4933]: I1201 09:54:21.608324 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2cp7\" (UniqueName: \"kubernetes.io/projected/c7de84cc-bb1a-45ba-bbba-acc140d0facc-kube-api-access-d2cp7\") pod \"c7de84cc-bb1a-45ba-bbba-acc140d0facc\" (UID: \"c7de84cc-bb1a-45ba-bbba-acc140d0facc\") " Dec 01 09:54:21 crc kubenswrapper[4933]: I1201 09:54:21.617503 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7de84cc-bb1a-45ba-bbba-acc140d0facc-kube-api-access-d2cp7" (OuterVolumeSpecName: "kube-api-access-d2cp7") pod "c7de84cc-bb1a-45ba-bbba-acc140d0facc" (UID: "c7de84cc-bb1a-45ba-bbba-acc140d0facc"). InnerVolumeSpecName "kube-api-access-d2cp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:54:21 crc kubenswrapper[4933]: I1201 09:54:21.711234 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2cp7\" (UniqueName: \"kubernetes.io/projected/c7de84cc-bb1a-45ba-bbba-acc140d0facc-kube-api-access-d2cp7\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.346431 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.347225 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c7de84cc-bb1a-45ba-bbba-acc140d0facc","Type":"ContainerDied","Data":"d5a696635a1f8cc07589a917e60ad35dca0dedbacefd5a370eed41947569a067"} Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.347274 4933 scope.go:117] "RemoveContainer" containerID="af94f350651a42ea0900d9ce74a1f4c564b641f5865ec051136098b1db9a214a" Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.381573 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.398360 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.413369 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 09:54:22 crc kubenswrapper[4933]: E1201 09:54:22.414056 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7de84cc-bb1a-45ba-bbba-acc140d0facc" containerName="kube-state-metrics" Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.414085 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7de84cc-bb1a-45ba-bbba-acc140d0facc" containerName="kube-state-metrics" Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.414405 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7de84cc-bb1a-45ba-bbba-acc140d0facc" containerName="kube-state-metrics" Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.415587 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.423558 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.423873 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.427524 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.528803 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/665307e0-fe7b-411a-b394-d383671c8809-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"665307e0-fe7b-411a-b394-d383671c8809\") " pod="openstack/kube-state-metrics-0" Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.528865 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/665307e0-fe7b-411a-b394-d383671c8809-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"665307e0-fe7b-411a-b394-d383671c8809\") " pod="openstack/kube-state-metrics-0" Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.528901 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmk7v\" (UniqueName: \"kubernetes.io/projected/665307e0-fe7b-411a-b394-d383671c8809-kube-api-access-fmk7v\") pod \"kube-state-metrics-0\" (UID: \"665307e0-fe7b-411a-b394-d383671c8809\") " pod="openstack/kube-state-metrics-0" Dec 01 09:54:22 crc 
kubenswrapper[4933]: I1201 09:54:22.529268 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/665307e0-fe7b-411a-b394-d383671c8809-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"665307e0-fe7b-411a-b394-d383671c8809\") " pod="openstack/kube-state-metrics-0" Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.632192 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/665307e0-fe7b-411a-b394-d383671c8809-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"665307e0-fe7b-411a-b394-d383671c8809\") " pod="openstack/kube-state-metrics-0" Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.632281 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/665307e0-fe7b-411a-b394-d383671c8809-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"665307e0-fe7b-411a-b394-d383671c8809\") " pod="openstack/kube-state-metrics-0" Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.632354 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmk7v\" (UniqueName: \"kubernetes.io/projected/665307e0-fe7b-411a-b394-d383671c8809-kube-api-access-fmk7v\") pod \"kube-state-metrics-0\" (UID: \"665307e0-fe7b-411a-b394-d383671c8809\") " pod="openstack/kube-state-metrics-0" Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.632431 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/665307e0-fe7b-411a-b394-d383671c8809-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"665307e0-fe7b-411a-b394-d383671c8809\") " pod="openstack/kube-state-metrics-0" Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.639290 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/665307e0-fe7b-411a-b394-d383671c8809-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"665307e0-fe7b-411a-b394-d383671c8809\") " pod="openstack/kube-state-metrics-0" Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.639357 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/665307e0-fe7b-411a-b394-d383671c8809-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"665307e0-fe7b-411a-b394-d383671c8809\") " pod="openstack/kube-state-metrics-0" Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.642197 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/665307e0-fe7b-411a-b394-d383671c8809-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"665307e0-fe7b-411a-b394-d383671c8809\") " pod="openstack/kube-state-metrics-0" Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.660639 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmk7v\" (UniqueName: \"kubernetes.io/projected/665307e0-fe7b-411a-b394-d383671c8809-kube-api-access-fmk7v\") pod \"kube-state-metrics-0\" (UID: \"665307e0-fe7b-411a-b394-d383671c8809\") " pod="openstack/kube-state-metrics-0" Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.741227 
4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.994948 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.995846 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a09e8e0e-3ffb-4a17-8291-eafcf23617bf" containerName="ceilometer-central-agent" containerID="cri-o://3380c79a60d7c89c4524245ba20f321ac4b82a370ad9261786bf967b0913a7e5" gracePeriod=30 Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.996285 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a09e8e0e-3ffb-4a17-8291-eafcf23617bf" containerName="proxy-httpd" containerID="cri-o://da19e55062509f01cbc6d46bf589d3db4636e7ca146a1fb1e139f4516d53c366" gracePeriod=30 Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.996416 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a09e8e0e-3ffb-4a17-8291-eafcf23617bf" containerName="ceilometer-notification-agent" containerID="cri-o://1b7d3f91fd3ad09220d3635f245cc99964d038ea4a5c4b0e5a2f773cb2681ee1" gracePeriod=30 Dec 01 09:54:22 crc kubenswrapper[4933]: I1201 09:54:22.996495 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a09e8e0e-3ffb-4a17-8291-eafcf23617bf" containerName="sg-core" containerID="cri-o://967a949f1dff0ebf7c9c52a0b5132457859b181b6ad00003b0f0e651a664f7bb" gracePeriod=30 Dec 01 09:54:23 crc kubenswrapper[4933]: I1201 09:54:23.265068 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 09:54:23 crc kubenswrapper[4933]: W1201 09:54:23.269104 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod665307e0_fe7b_411a_b394_d383671c8809.slice/crio-70c71b2bd84033e0d24ae172b33d13a86fa083eeff7452fdd7652f11182e87b7 WatchSource:0}: Error finding container 70c71b2bd84033e0d24ae172b33d13a86fa083eeff7452fdd7652f11182e87b7: Status 404 returned error can't find the container with id 70c71b2bd84033e0d24ae172b33d13a86fa083eeff7452fdd7652f11182e87b7 Dec 01 09:54:23 crc kubenswrapper[4933]: I1201 09:54:23.365252 4933 generic.go:334] "Generic (PLEG): container finished" podID="a09e8e0e-3ffb-4a17-8291-eafcf23617bf" containerID="da19e55062509f01cbc6d46bf589d3db4636e7ca146a1fb1e139f4516d53c366" exitCode=0 Dec 01 09:54:23 crc kubenswrapper[4933]: I1201 09:54:23.365295 4933 generic.go:334] "Generic (PLEG): container finished" podID="a09e8e0e-3ffb-4a17-8291-eafcf23617bf" containerID="967a949f1dff0ebf7c9c52a0b5132457859b181b6ad00003b0f0e651a664f7bb" exitCode=2 Dec 01 09:54:23 crc kubenswrapper[4933]: I1201 09:54:23.365372 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a09e8e0e-3ffb-4a17-8291-eafcf23617bf","Type":"ContainerDied","Data":"da19e55062509f01cbc6d46bf589d3db4636e7ca146a1fb1e139f4516d53c366"} Dec 01 09:54:23 crc kubenswrapper[4933]: I1201 09:54:23.365408 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a09e8e0e-3ffb-4a17-8291-eafcf23617bf","Type":"ContainerDied","Data":"967a949f1dff0ebf7c9c52a0b5132457859b181b6ad00003b0f0e651a664f7bb"} Dec 01 09:54:23 crc kubenswrapper[4933]: I1201 09:54:23.370789 4933 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"665307e0-fe7b-411a-b394-d383671c8809","Type":"ContainerStarted","Data":"70c71b2bd84033e0d24ae172b33d13a86fa083eeff7452fdd7652f11182e87b7"} Dec 01 09:54:23 crc kubenswrapper[4933]: I1201 09:54:23.681653 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7de84cc-bb1a-45ba-bbba-acc140d0facc" path="/var/lib/kubelet/pods/c7de84cc-bb1a-45ba-bbba-acc140d0facc/volumes" Dec 01 09:54:24 crc kubenswrapper[4933]: I1201 09:54:24.386279 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"665307e0-fe7b-411a-b394-d383671c8809","Type":"ContainerStarted","Data":"036f51b5fca059883899dfb0b02c027bee1ad57dbfb1282ee1b1eadf5d10932c"} Dec 01 09:54:24 crc kubenswrapper[4933]: I1201 09:54:24.386477 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 01 09:54:24 crc kubenswrapper[4933]: I1201 09:54:24.391496 4933 generic.go:334] "Generic (PLEG): container finished" podID="a09e8e0e-3ffb-4a17-8291-eafcf23617bf" containerID="3380c79a60d7c89c4524245ba20f321ac4b82a370ad9261786bf967b0913a7e5" exitCode=0 Dec 01 09:54:24 crc kubenswrapper[4933]: I1201 09:54:24.391576 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a09e8e0e-3ffb-4a17-8291-eafcf23617bf","Type":"ContainerDied","Data":"3380c79a60d7c89c4524245ba20f321ac4b82a370ad9261786bf967b0913a7e5"} Dec 01 09:54:24 crc kubenswrapper[4933]: I1201 09:54:24.414886 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.067960019 podStartE2EDuration="2.414865851s" podCreationTimestamp="2025-12-01 09:54:22 +0000 UTC" firstStartedPulling="2025-12-01 09:54:23.272631743 +0000 UTC m=+1353.914355358" lastFinishedPulling="2025-12-01 09:54:23.619537585 +0000 UTC m=+1354.261261190" observedRunningTime="2025-12-01 09:54:24.412735739 +0000 UTC m=+1355.054459354" watchObservedRunningTime="2025-12-01 09:54:24.414865851 +0000 UTC m=+1355.056589466" Dec 01 09:54:24 crc kubenswrapper[4933]: I1201 09:54:24.719814 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 01 09:54:24 crc kubenswrapper[4933]: I1201 09:54:24.977506 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 01 09:54:24 crc kubenswrapper[4933]: I1201 09:54:24.977566 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 01 09:54:25 crc kubenswrapper[4933]: I1201 09:54:25.425876 4933 generic.go:334] "Generic (PLEG): container finished" podID="a09e8e0e-3ffb-4a17-8291-eafcf23617bf" containerID="1b7d3f91fd3ad09220d3635f245cc99964d038ea4a5c4b0e5a2f773cb2681ee1" exitCode=0 Dec 01 09:54:25 crc kubenswrapper[4933]: I1201 09:54:25.428716 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a09e8e0e-3ffb-4a17-8291-eafcf23617bf","Type":"ContainerDied","Data":"1b7d3f91fd3ad09220d3635f245cc99964d038ea4a5c4b0e5a2f773cb2681ee1"} Dec 01 09:54:25 crc kubenswrapper[4933]: I1201 09:54:25.675355 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:54:25 crc kubenswrapper[4933]: I1201 09:54:25.821894 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-run-httpd\") pod \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " Dec 01 09:54:25 crc kubenswrapper[4933]: I1201 09:54:25.822014 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-scripts\") pod \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " Dec 01 09:54:25 crc kubenswrapper[4933]: I1201 09:54:25.822065 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-log-httpd\") pod \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " Dec 01 09:54:25 crc kubenswrapper[4933]: I1201 09:54:25.822104 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-config-data\") pod \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " Dec 01 09:54:25 crc kubenswrapper[4933]: I1201 09:54:25.822238 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf5gg\" (UniqueName: \"kubernetes.io/projected/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-kube-api-access-xf5gg\") pod \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " Dec 01 09:54:25 crc kubenswrapper[4933]: I1201 09:54:25.822296 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-combined-ca-bundle\") pod \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " Dec 01 09:54:25 crc kubenswrapper[4933]: I1201 09:54:25.822427 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-sg-core-conf-yaml\") pod \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\" (UID: \"a09e8e0e-3ffb-4a17-8291-eafcf23617bf\") " Dec 01 09:54:25 crc kubenswrapper[4933]: I1201 09:54:25.823523 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a09e8e0e-3ffb-4a17-8291-eafcf23617bf" (UID: "a09e8e0e-3ffb-4a17-8291-eafcf23617bf"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:54:25 crc kubenswrapper[4933]: I1201 09:54:25.825534 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a09e8e0e-3ffb-4a17-8291-eafcf23617bf" (UID: "a09e8e0e-3ffb-4a17-8291-eafcf23617bf"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:54:25 crc kubenswrapper[4933]: I1201 09:54:25.833484 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-kube-api-access-xf5gg" (OuterVolumeSpecName: "kube-api-access-xf5gg") pod "a09e8e0e-3ffb-4a17-8291-eafcf23617bf" (UID: "a09e8e0e-3ffb-4a17-8291-eafcf23617bf"). InnerVolumeSpecName "kube-api-access-xf5gg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:54:25 crc kubenswrapper[4933]: I1201 09:54:25.833759 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-scripts" (OuterVolumeSpecName: "scripts") pod "a09e8e0e-3ffb-4a17-8291-eafcf23617bf" (UID: "a09e8e0e-3ffb-4a17-8291-eafcf23617bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:25 crc kubenswrapper[4933]: I1201 09:54:25.871288 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a09e8e0e-3ffb-4a17-8291-eafcf23617bf" (UID: "a09e8e0e-3ffb-4a17-8291-eafcf23617bf"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:25 crc kubenswrapper[4933]: I1201 09:54:25.926713 4933 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:25 crc kubenswrapper[4933]: I1201 09:54:25.926753 4933 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:25 crc kubenswrapper[4933]: I1201 09:54:25.926768 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:25 crc kubenswrapper[4933]: I1201 09:54:25.926779 4933 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:25 crc kubenswrapper[4933]: I1201 09:54:25.926795 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf5gg\" (UniqueName: \"kubernetes.io/projected/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-kube-api-access-xf5gg\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:25 crc kubenswrapper[4933]: I1201 09:54:25.931168 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a09e8e0e-3ffb-4a17-8291-eafcf23617bf" (UID: "a09e8e0e-3ffb-4a17-8291-eafcf23617bf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:25 crc kubenswrapper[4933]: I1201 09:54:25.985683 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b42cd5bd-bb2b-430b-a321-8da42af87665" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:54:25 crc kubenswrapper[4933]: I1201 09:54:25.986164 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b42cd5bd-bb2b-430b-a321-8da42af87665" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.022857 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-config-data" (OuterVolumeSpecName: "config-data") pod "a09e8e0e-3ffb-4a17-8291-eafcf23617bf" (UID: "a09e8e0e-3ffb-4a17-8291-eafcf23617bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.028390 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.028421 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09e8e0e-3ffb-4a17-8291-eafcf23617bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.440769 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a09e8e0e-3ffb-4a17-8291-eafcf23617bf","Type":"ContainerDied","Data":"50459de779d73acea0ff07833c62bdae6cfd0bd051bdb1411c5e08dfdf28ac79"} Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.440854 4933 scope.go:117] "RemoveContainer" containerID="da19e55062509f01cbc6d46bf589d3db4636e7ca146a1fb1e139f4516d53c366" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.441168 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.478772 4933 scope.go:117] "RemoveContainer" containerID="967a949f1dff0ebf7c9c52a0b5132457859b181b6ad00003b0f0e651a664f7bb" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.486617 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.496938 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="c7de84cc-bb1a-45ba-bbba-acc140d0facc" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.498910 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.534515 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:54:26 crc kubenswrapper[4933]: E1201 09:54:26.535526 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a09e8e0e-3ffb-4a17-8291-eafcf23617bf" containerName="proxy-httpd" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.535547 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="a09e8e0e-3ffb-4a17-8291-eafcf23617bf" containerName="proxy-httpd" Dec 01 09:54:26 crc kubenswrapper[4933]: E1201 09:54:26.535562 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a09e8e0e-3ffb-4a17-8291-eafcf23617bf" containerName="ceilometer-central-agent" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.535568 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="a09e8e0e-3ffb-4a17-8291-eafcf23617bf" containerName="ceilometer-central-agent" Dec 01 09:54:26 crc kubenswrapper[4933]: E1201 09:54:26.535590 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a09e8e0e-3ffb-4a17-8291-eafcf23617bf" containerName="ceilometer-notification-agent" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.535596 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="a09e8e0e-3ffb-4a17-8291-eafcf23617bf" containerName="ceilometer-notification-agent" Dec 01 09:54:26 crc kubenswrapper[4933]: E1201 09:54:26.535624 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a09e8e0e-3ffb-4a17-8291-eafcf23617bf" containerName="sg-core" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.535630 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="a09e8e0e-3ffb-4a17-8291-eafcf23617bf" containerName="sg-core" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.535837 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="a09e8e0e-3ffb-4a17-8291-eafcf23617bf" containerName="sg-core" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.535851 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="a09e8e0e-3ffb-4a17-8291-eafcf23617bf" containerName="ceilometer-notification-agent" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.535865 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="a09e8e0e-3ffb-4a17-8291-eafcf23617bf" containerName="ceilometer-central-agent" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.535878 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="a09e8e0e-3ffb-4a17-8291-eafcf23617bf" containerName="proxy-httpd" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.542295 
4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.548727 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.554657 4933 scope.go:117] "RemoveContainer" containerID="1b7d3f91fd3ad09220d3635f245cc99964d038ea4a5c4b0e5a2f773cb2681ee1" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.554805 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.554907 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.554979 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.601002 4933 scope.go:117] "RemoveContainer" containerID="3380c79a60d7c89c4524245ba20f321ac4b82a370ad9261786bf967b0913a7e5" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.644458 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " pod="openstack/ceilometer-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.644600 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22dsn\" (UniqueName: \"kubernetes.io/projected/09aff114-dd0c-43db-8a31-1f1d105ac909-kube-api-access-22dsn\") pod \"ceilometer-0\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " pod="openstack/ceilometer-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.644709 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-scripts\") pod \"ceilometer-0\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " pod="openstack/ceilometer-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.645005 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-config-data\") pod \"ceilometer-0\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " pod="openstack/ceilometer-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.645217 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09aff114-dd0c-43db-8a31-1f1d105ac909-run-httpd\") pod \"ceilometer-0\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " pod="openstack/ceilometer-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.645244 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09aff114-dd0c-43db-8a31-1f1d105ac909-log-httpd\") pod \"ceilometer-0\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " pod="openstack/ceilometer-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.645292 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " pod="openstack/ceilometer-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.645473 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " pod="openstack/ceilometer-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.688984 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.748019 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " pod="openstack/ceilometer-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.748089 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22dsn\" (UniqueName: \"kubernetes.io/projected/09aff114-dd0c-43db-8a31-1f1d105ac909-kube-api-access-22dsn\") pod \"ceilometer-0\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " pod="openstack/ceilometer-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.748133 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-scripts\") pod \"ceilometer-0\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " pod="openstack/ceilometer-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.748229 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-config-data\") pod \"ceilometer-0\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " pod="openstack/ceilometer-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.748317 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09aff114-dd0c-43db-8a31-1f1d105ac909-run-httpd\") pod \"ceilometer-0\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " pod="openstack/ceilometer-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.748342 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09aff114-dd0c-43db-8a31-1f1d105ac909-log-httpd\") pod \"ceilometer-0\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " pod="openstack/ceilometer-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.748374 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " pod="openstack/ceilometer-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.748437 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " 
pod="openstack/ceilometer-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.750196 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09aff114-dd0c-43db-8a31-1f1d105ac909-run-httpd\") pod \"ceilometer-0\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " pod="openstack/ceilometer-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.751574 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09aff114-dd0c-43db-8a31-1f1d105ac909-log-httpd\") pod \"ceilometer-0\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " pod="openstack/ceilometer-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.756670 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " pod="openstack/ceilometer-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.757442 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " pod="openstack/ceilometer-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.760539 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " pod="openstack/ceilometer-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.763640 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-scripts\") pod \"ceilometer-0\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " pod="openstack/ceilometer-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.765981 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-config-data\") pod \"ceilometer-0\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " pod="openstack/ceilometer-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.775249 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22dsn\" (UniqueName: \"kubernetes.io/projected/09aff114-dd0c-43db-8a31-1f1d105ac909-kube-api-access-22dsn\") pod \"ceilometer-0\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " pod="openstack/ceilometer-0" Dec 01 09:54:26 crc kubenswrapper[4933]: I1201 09:54:26.881998 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:54:27 crc kubenswrapper[4933]: I1201 09:54:27.217280 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:54:27 crc kubenswrapper[4933]: I1201 09:54:27.454182 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09aff114-dd0c-43db-8a31-1f1d105ac909","Type":"ContainerStarted","Data":"7b2678935c67f20557d8345448dbae9e92ecbe902518c6d6b289ce31e67478a5"} Dec 01 09:54:27 crc kubenswrapper[4933]: I1201 09:54:27.684285 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a09e8e0e-3ffb-4a17-8291-eafcf23617bf" path="/var/lib/kubelet/pods/a09e8e0e-3ffb-4a17-8291-eafcf23617bf/volumes" Dec 01 09:54:27 crc kubenswrapper[4933]: I1201 09:54:27.725816 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 09:54:27 crc kubenswrapper[4933]: I1201 09:54:27.725928 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 09:54:28 crc kubenswrapper[4933]: I1201 09:54:28.474772 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09aff114-dd0c-43db-8a31-1f1d105ac909","Type":"ContainerStarted","Data":"188105522f3afa9329f09ac1f2cb4df4b3e7ddeec6a5b9c2d37212ea0e9efb00"} Dec 01 09:54:28 crc kubenswrapper[4933]: I1201 09:54:28.809718 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a99ae55e-670e-451b-8c67-85d537db7077" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:54:28 crc kubenswrapper[4933]: I1201 09:54:28.810187 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a99ae55e-670e-451b-8c67-85d537db7077" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:54:29 crc kubenswrapper[4933]: I1201 09:54:29.719776 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 01 09:54:29 crc kubenswrapper[4933]: I1201 09:54:29.766696 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 01 09:54:30 crc kubenswrapper[4933]: I1201 09:54:30.504606 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09aff114-dd0c-43db-8a31-1f1d105ac909","Type":"ContainerStarted","Data":"263861fa9f1f81dbb73be57d1d3b1d021ffa30c3a91e7fad73787704908193ef"} Dec 01 09:54:30 crc kubenswrapper[4933]: I1201 09:54:30.505108 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09aff114-dd0c-43db-8a31-1f1d105ac909","Type":"ContainerStarted","Data":"3746d9cd1f34f44122a6f557146e9dff19f23cd5c02b626bb539d4b57eb8a1b8"} Dec 01 09:54:30 crc kubenswrapper[4933]: I1201 09:54:30.540929 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 01 09:54:32 crc kubenswrapper[4933]: I1201 09:54:32.542867 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09aff114-dd0c-43db-8a31-1f1d105ac909","Type":"ContainerStarted","Data":"51ee6d1e1d227a6f0575d2703377e241b4f26575dbd892b1707aef745516669e"} Dec 01 09:54:32 crc kubenswrapper[4933]: 
I1201 09:54:32.543972 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 09:54:32 crc kubenswrapper[4933]: I1201 09:54:32.576189 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.317850248 podStartE2EDuration="6.576156271s" podCreationTimestamp="2025-12-01 09:54:26 +0000 UTC" firstStartedPulling="2025-12-01 09:54:27.229722404 +0000 UTC m=+1357.871446019" lastFinishedPulling="2025-12-01 09:54:31.488028427 +0000 UTC m=+1362.129752042" observedRunningTime="2025-12-01 09:54:32.570662837 +0000 UTC m=+1363.212386472" watchObservedRunningTime="2025-12-01 09:54:32.576156271 +0000 UTC m=+1363.217879886" Dec 01 09:54:32 crc kubenswrapper[4933]: I1201 09:54:32.761043 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 01 09:54:34 crc kubenswrapper[4933]: I1201 09:54:34.987031 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 01 09:54:34 crc kubenswrapper[4933]: I1201 09:54:34.989164 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 01 09:54:34 crc kubenswrapper[4933]: I1201 09:54:34.994230 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 01 09:54:35 crc kubenswrapper[4933]: I1201 09:54:35.593962 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 01 09:54:37 crc kubenswrapper[4933]: I1201 09:54:37.563478 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:37 crc kubenswrapper[4933]: I1201 09:54:37.615325 4933 generic.go:334] "Generic (PLEG): container finished" podID="caa4bf9b-bdd7-4f39-aead-94858835d5f1" containerID="b47f736cbaa01b4cdaff4ef2cd5c7d3df2cf9f7e71a9b717aa781ef2b6737d00" exitCode=137 Dec 01 09:54:37 crc kubenswrapper[4933]: I1201 09:54:37.615408 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:37 crc kubenswrapper[4933]: I1201 09:54:37.616363 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"caa4bf9b-bdd7-4f39-aead-94858835d5f1","Type":"ContainerDied","Data":"b47f736cbaa01b4cdaff4ef2cd5c7d3df2cf9f7e71a9b717aa781ef2b6737d00"} Dec 01 09:54:37 crc kubenswrapper[4933]: I1201 09:54:37.616403 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"caa4bf9b-bdd7-4f39-aead-94858835d5f1","Type":"ContainerDied","Data":"574a422b58bdc2a45f63aeab791c0979d409ddb7327d2cb4844c3049cc573a9d"} Dec 01 09:54:37 crc kubenswrapper[4933]: I1201 09:54:37.616426 4933 scope.go:117] "RemoveContainer" containerID="b47f736cbaa01b4cdaff4ef2cd5c7d3df2cf9f7e71a9b717aa781ef2b6737d00" Dec 01 09:54:37 crc kubenswrapper[4933]: I1201 09:54:37.646024 4933 scope.go:117] "RemoveContainer" containerID="b47f736cbaa01b4cdaff4ef2cd5c7d3df2cf9f7e71a9b717aa781ef2b6737d00" Dec 01 09:54:37 crc kubenswrapper[4933]: E1201 09:54:37.648643 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b47f736cbaa01b4cdaff4ef2cd5c7d3df2cf9f7e71a9b717aa781ef2b6737d00\": container with ID starting with b47f736cbaa01b4cdaff4ef2cd5c7d3df2cf9f7e71a9b717aa781ef2b6737d00 not found: ID does not exist" containerID="b47f736cbaa01b4cdaff4ef2cd5c7d3df2cf9f7e71a9b717aa781ef2b6737d00" Dec 01 09:54:37 crc kubenswrapper[4933]: I1201 09:54:37.648711 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b47f736cbaa01b4cdaff4ef2cd5c7d3df2cf9f7e71a9b717aa781ef2b6737d00"} err="failed to get container status \"b47f736cbaa01b4cdaff4ef2cd5c7d3df2cf9f7e71a9b717aa781ef2b6737d00\": rpc error: code = NotFound desc = could not find container \"b47f736cbaa01b4cdaff4ef2cd5c7d3df2cf9f7e71a9b717aa781ef2b6737d00\": container with ID starting with b47f736cbaa01b4cdaff4ef2cd5c7d3df2cf9f7e71a9b717aa781ef2b6737d00 not found: ID does not exist" Dec 01 09:54:37 crc kubenswrapper[4933]: I1201 09:54:37.661529 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qptzs\" (UniqueName: \"kubernetes.io/projected/caa4bf9b-bdd7-4f39-aead-94858835d5f1-kube-api-access-qptzs\") pod \"caa4bf9b-bdd7-4f39-aead-94858835d5f1\" (UID: \"caa4bf9b-bdd7-4f39-aead-94858835d5f1\") " Dec 01 09:54:37 crc kubenswrapper[4933]: I1201 09:54:37.661909 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caa4bf9b-bdd7-4f39-aead-94858835d5f1-combined-ca-bundle\") pod \"caa4bf9b-bdd7-4f39-aead-94858835d5f1\" (UID: \"caa4bf9b-bdd7-4f39-aead-94858835d5f1\") " Dec 01 09:54:37 crc kubenswrapper[4933]: I1201 09:54:37.661977 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caa4bf9b-bdd7-4f39-aead-94858835d5f1-config-data\") pod \"caa4bf9b-bdd7-4f39-aead-94858835d5f1\" (UID: \"caa4bf9b-bdd7-4f39-aead-94858835d5f1\") " Dec 01 09:54:37 crc kubenswrapper[4933]: I1201 09:54:37.676800 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caa4bf9b-bdd7-4f39-aead-94858835d5f1-kube-api-access-qptzs" (OuterVolumeSpecName: "kube-api-access-qptzs") pod "caa4bf9b-bdd7-4f39-aead-94858835d5f1" (UID: "caa4bf9b-bdd7-4f39-aead-94858835d5f1"). 
InnerVolumeSpecName "kube-api-access-qptzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:54:37 crc kubenswrapper[4933]: I1201 09:54:37.710576 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caa4bf9b-bdd7-4f39-aead-94858835d5f1-config-data" (OuterVolumeSpecName: "config-data") pod "caa4bf9b-bdd7-4f39-aead-94858835d5f1" (UID: "caa4bf9b-bdd7-4f39-aead-94858835d5f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:37 crc kubenswrapper[4933]: I1201 09:54:37.711799 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caa4bf9b-bdd7-4f39-aead-94858835d5f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "caa4bf9b-bdd7-4f39-aead-94858835d5f1" (UID: "caa4bf9b-bdd7-4f39-aead-94858835d5f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:37 crc kubenswrapper[4933]: I1201 09:54:37.765719 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caa4bf9b-bdd7-4f39-aead-94858835d5f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:37 crc kubenswrapper[4933]: I1201 09:54:37.765804 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caa4bf9b-bdd7-4f39-aead-94858835d5f1-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:37 crc kubenswrapper[4933]: I1201 09:54:37.765817 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qptzs\" (UniqueName: \"kubernetes.io/projected/caa4bf9b-bdd7-4f39-aead-94858835d5f1-kube-api-access-qptzs\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:37 crc kubenswrapper[4933]: I1201 09:54:37.770372 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 01 09:54:37 crc kubenswrapper[4933]: I1201 09:54:37.770442 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 01 09:54:37 crc kubenswrapper[4933]: I1201 09:54:37.772424 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 01 09:54:37 crc kubenswrapper[4933]: I1201 09:54:37.772454 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 01 09:54:37 crc kubenswrapper[4933]: I1201 09:54:37.777175 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 01 09:54:37 crc kubenswrapper[4933]: I1201 09:54:37.779657 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 01 09:54:37 crc kubenswrapper[4933]: I1201 09:54:37.968399 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 09:54:37 crc kubenswrapper[4933]: I1201 09:54:37.986394 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.023409 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 09:54:38 crc kubenswrapper[4933]: E1201 09:54:38.024077 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caa4bf9b-bdd7-4f39-aead-94858835d5f1" containerName="nova-cell1-novncproxy-novncproxy" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.024098 4933 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="caa4bf9b-bdd7-4f39-aead-94858835d5f1" containerName="nova-cell1-novncproxy-novncproxy" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.024396 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="caa4bf9b-bdd7-4f39-aead-94858835d5f1" containerName="nova-cell1-novncproxy-novncproxy" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.025399 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.033138 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.033468 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.033607 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.047225 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.072365 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-vt8jx"] Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.074980 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.097904 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-vt8jx"] Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.183071 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c08ac8e-6639-413d-8534-625fa6adc9ae-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c08ac8e-6639-413d-8534-625fa6adc9ae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.183587 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-vt8jx\" (UID: \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.183688 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-vt8jx\" (UID: \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.183829 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v78zt\" (UniqueName: \"kubernetes.io/projected/6c08ac8e-6639-413d-8534-625fa6adc9ae-kube-api-access-v78zt\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c08ac8e-6639-413d-8534-625fa6adc9ae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.183959 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-vt8jx\" (UID: \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.184059 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c08ac8e-6639-413d-8534-625fa6adc9ae-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c08ac8e-6639-413d-8534-625fa6adc9ae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.184200 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c08ac8e-6639-413d-8534-625fa6adc9ae-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c08ac8e-6639-413d-8534-625fa6adc9ae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.184287 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-vt8jx\" (UID: \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.184721 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w46l\" (UniqueName: \"kubernetes.io/projected/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-kube-api-access-4w46l\") pod \"dnsmasq-dns-89c5cd4d5-vt8jx\" (UID: \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.184827 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-config\") pod \"dnsmasq-dns-89c5cd4d5-vt8jx\" (UID: \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.185040 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c08ac8e-6639-413d-8534-625fa6adc9ae-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c08ac8e-6639-413d-8534-625fa6adc9ae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.287205 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c08ac8e-6639-413d-8534-625fa6adc9ae-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c08ac8e-6639-413d-8534-625fa6adc9ae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.287291 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-vt8jx\" (UID: \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.287355 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-vt8jx\" (UID: \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.287400 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v78zt\" (UniqueName: \"kubernetes.io/projected/6c08ac8e-6639-413d-8534-625fa6adc9ae-kube-api-access-v78zt\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c08ac8e-6639-413d-8534-625fa6adc9ae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.287440 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-vt8jx\" (UID: \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.287471 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c08ac8e-6639-413d-8534-625fa6adc9ae-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c08ac8e-6639-413d-8534-625fa6adc9ae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.287520 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c08ac8e-6639-413d-8534-625fa6adc9ae-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c08ac8e-6639-413d-8534-625fa6adc9ae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.287539 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-vt8jx\" (UID: \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.287581 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w46l\" (UniqueName: \"kubernetes.io/projected/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-kube-api-access-4w46l\") pod \"dnsmasq-dns-89c5cd4d5-vt8jx\" (UID: \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.287599 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-config\") pod \"dnsmasq-dns-89c5cd4d5-vt8jx\" (UID: \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.287644 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c08ac8e-6639-413d-8534-625fa6adc9ae-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c08ac8e-6639-413d-8534-625fa6adc9ae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.288584 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-vt8jx\" (UID: \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.289465 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-vt8jx\" (UID: \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.289469 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-vt8jx\" (UID: \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.291456 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-config\") pod \"dnsmasq-dns-89c5cd4d5-vt8jx\" (UID: \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.294782 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-vt8jx\" (UID: \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.298169 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c08ac8e-6639-413d-8534-625fa6adc9ae-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c08ac8e-6639-413d-8534-625fa6adc9ae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.298238 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c08ac8e-6639-413d-8534-625fa6adc9ae-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c08ac8e-6639-413d-8534-625fa6adc9ae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.299083 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c08ac8e-6639-413d-8534-625fa6adc9ae-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c08ac8e-6639-413d-8534-625fa6adc9ae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.303941 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c08ac8e-6639-413d-8534-625fa6adc9ae-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c08ac8e-6639-413d-8534-625fa6adc9ae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.313265 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w46l\" (UniqueName: \"kubernetes.io/projected/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-kube-api-access-4w46l\") pod \"dnsmasq-dns-89c5cd4d5-vt8jx\" (UID: 
\"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.316093 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v78zt\" (UniqueName: \"kubernetes.io/projected/6c08ac8e-6639-413d-8534-625fa6adc9ae-kube-api-access-v78zt\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c08ac8e-6639-413d-8534-625fa6adc9ae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.356960 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.417199 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" Dec 01 09:54:38 crc kubenswrapper[4933]: I1201 09:54:38.927294 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 09:54:38 crc kubenswrapper[4933]: W1201 09:54:38.928733 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c08ac8e_6639_413d_8534_625fa6adc9ae.slice/crio-6dadb5b233a7ecfa0728ca30b870b9602082ff2cb10af129d4d1e073e539c588 WatchSource:0}: Error finding container 6dadb5b233a7ecfa0728ca30b870b9602082ff2cb10af129d4d1e073e539c588: Status 404 returned error can't find the container with id 6dadb5b233a7ecfa0728ca30b870b9602082ff2cb10af129d4d1e073e539c588 Dec 01 09:54:39 crc kubenswrapper[4933]: I1201 09:54:39.050858 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-vt8jx"] Dec 01 09:54:39 crc kubenswrapper[4933]: W1201 09:54:39.064378 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2c9ef2a_a283_4e71_8d32_03bcc513b6e8.slice/crio-3d315677482ce0ce2c9d3803ca037d68867533e9fac4f2876206e1bb6e07d7d4 WatchSource:0}: Error finding container 3d315677482ce0ce2c9d3803ca037d68867533e9fac4f2876206e1bb6e07d7d4: Status 404 returned error can't find the container with id 3d315677482ce0ce2c9d3803ca037d68867533e9fac4f2876206e1bb6e07d7d4 Dec 01 09:54:39 crc kubenswrapper[4933]: I1201 09:54:39.645887 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6c08ac8e-6639-413d-8534-625fa6adc9ae","Type":"ContainerStarted","Data":"b3bfcc2faa0908d255b02141f23ee1759b3e4f16e66001e6d284c4bd1a2df004"} Dec 01 09:54:39 crc kubenswrapper[4933]: I1201 09:54:39.646400 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6c08ac8e-6639-413d-8534-625fa6adc9ae","Type":"ContainerStarted","Data":"6dadb5b233a7ecfa0728ca30b870b9602082ff2cb10af129d4d1e073e539c588"} Dec 01 09:54:39 crc kubenswrapper[4933]: I1201 09:54:39.650146 4933 generic.go:334] "Generic (PLEG): container finished" podID="d2c9ef2a-a283-4e71-8d32-03bcc513b6e8" containerID="f23f8c88c27f25454d94dda25b2e71d9d1e17c174df760b9a08a4f282d143e6b" exitCode=0 Dec 01 09:54:39 crc kubenswrapper[4933]: I1201 09:54:39.650216 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" event={"ID":"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8","Type":"ContainerDied","Data":"f23f8c88c27f25454d94dda25b2e71d9d1e17c174df760b9a08a4f282d143e6b"} Dec 01 09:54:39 crc kubenswrapper[4933]: I1201 09:54:39.650332 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" event={"ID":"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8","Type":"ContainerStarted","Data":"3d315677482ce0ce2c9d3803ca037d68867533e9fac4f2876206e1bb6e07d7d4"} Dec 01 09:54:39 crc kubenswrapper[4933]: I1201 09:54:39.673106 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.673075309 podStartE2EDuration="2.673075309s" podCreationTimestamp="2025-12-01 09:54:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:54:39.665640737 +0000 UTC m=+1370.307364342" watchObservedRunningTime="2025-12-01 09:54:39.673075309 +0000 UTC m=+1370.314798924" Dec 01 09:54:39 crc kubenswrapper[4933]: I1201 09:54:39.693817 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caa4bf9b-bdd7-4f39-aead-94858835d5f1" path="/var/lib/kubelet/pods/caa4bf9b-bdd7-4f39-aead-94858835d5f1/volumes" Dec 01 09:54:40 crc kubenswrapper[4933]: I1201 09:54:40.670803 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" event={"ID":"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8","Type":"ContainerStarted","Data":"7dda46422342fa46621cbb66b446a819140f58c7205d495eddf9114f4bad4578"} Dec 01 09:54:40 crc kubenswrapper[4933]: I1201 09:54:40.696766 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" podStartSLOduration=3.696735547 podStartE2EDuration="3.696735547s" podCreationTimestamp="2025-12-01 09:54:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:54:40.692069323 +0000 UTC m=+1371.333792938" watchObservedRunningTime="2025-12-01 09:54:40.696735547 +0000 UTC m=+1371.338459172" Dec 01 09:54:40 crc kubenswrapper[4933]: I1201 09:54:40.811921 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:54:40 crc kubenswrapper[4933]: I1201 09:54:40.812613 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a99ae55e-670e-451b-8c67-85d537db7077" containerName="nova-api-log" containerID="cri-o://198a8e438ab377bad55d3001d83a73b7a40f7286e35ee2f4d5af46d26825eece" gracePeriod=30 Dec 01 09:54:40 crc kubenswrapper[4933]: I1201 09:54:40.813481 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a99ae55e-670e-451b-8c67-85d537db7077" containerName="nova-api-api" containerID="cri-o://9d66eff247bef5885648e8753d25f82dc25ff7240f6f0c5f7e698fd0a14461bd" gracePeriod=30 Dec 01 09:54:41 crc kubenswrapper[4933]: I1201 09:54:41.687461 4933 generic.go:334] "Generic (PLEG): container finished" podID="a99ae55e-670e-451b-8c67-85d537db7077" containerID="198a8e438ab377bad55d3001d83a73b7a40f7286e35ee2f4d5af46d26825eece" exitCode=143 Dec 01 09:54:41 crc kubenswrapper[4933]: I1201 09:54:41.687520 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a99ae55e-670e-451b-8c67-85d537db7077","Type":"ContainerDied","Data":"198a8e438ab377bad55d3001d83a73b7a40f7286e35ee2f4d5af46d26825eece"} Dec 01 09:54:41 crc kubenswrapper[4933]: I1201 09:54:41.688538 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" Dec 01 09:54:41 crc kubenswrapper[4933]: I1201 09:54:41.887118 4933 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/ceilometer-0"] Dec 01 09:54:41 crc kubenswrapper[4933]: I1201 09:54:41.887724 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="09aff114-dd0c-43db-8a31-1f1d105ac909" containerName="ceilometer-central-agent" containerID="cri-o://188105522f3afa9329f09ac1f2cb4df4b3e7ddeec6a5b9c2d37212ea0e9efb00" gracePeriod=30 Dec 01 09:54:41 crc kubenswrapper[4933]: I1201 09:54:41.888543 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="09aff114-dd0c-43db-8a31-1f1d105ac909" containerName="proxy-httpd" containerID="cri-o://51ee6d1e1d227a6f0575d2703377e241b4f26575dbd892b1707aef745516669e" gracePeriod=30 Dec 01 09:54:41 crc kubenswrapper[4933]: I1201 09:54:41.888650 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="09aff114-dd0c-43db-8a31-1f1d105ac909" containerName="sg-core" containerID="cri-o://263861fa9f1f81dbb73be57d1d3b1d021ffa30c3a91e7fad73787704908193ef" gracePeriod=30 Dec 01 09:54:41 crc kubenswrapper[4933]: I1201 09:54:41.888737 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="09aff114-dd0c-43db-8a31-1f1d105ac909" containerName="ceilometer-notification-agent" containerID="cri-o://3746d9cd1f34f44122a6f557146e9dff19f23cd5c02b626bb539d4b57eb8a1b8" gracePeriod=30 Dec 01 09:54:41 crc kubenswrapper[4933]: I1201 09:54:41.899473 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="09aff114-dd0c-43db-8a31-1f1d105ac909" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.196:3000/\": read tcp 10.217.0.2:35486->10.217.0.196:3000: read: connection reset by peer" Dec 01 09:54:42 crc kubenswrapper[4933]: I1201 09:54:42.702521 4933 generic.go:334] "Generic (PLEG): container finished" podID="09aff114-dd0c-43db-8a31-1f1d105ac909" containerID="51ee6d1e1d227a6f0575d2703377e241b4f26575dbd892b1707aef745516669e" exitCode=0 Dec 01 09:54:42 crc kubenswrapper[4933]: I1201 09:54:42.702575 4933 generic.go:334] "Generic (PLEG): container finished" podID="09aff114-dd0c-43db-8a31-1f1d105ac909" containerID="263861fa9f1f81dbb73be57d1d3b1d021ffa30c3a91e7fad73787704908193ef" exitCode=2 Dec 01 09:54:42 crc kubenswrapper[4933]: I1201 09:54:42.702584 4933 generic.go:334] "Generic (PLEG): container finished" podID="09aff114-dd0c-43db-8a31-1f1d105ac909" containerID="188105522f3afa9329f09ac1f2cb4df4b3e7ddeec6a5b9c2d37212ea0e9efb00" exitCode=0 Dec 01 09:54:42 crc kubenswrapper[4933]: I1201 09:54:42.702618 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09aff114-dd0c-43db-8a31-1f1d105ac909","Type":"ContainerDied","Data":"51ee6d1e1d227a6f0575d2703377e241b4f26575dbd892b1707aef745516669e"} Dec 01 09:54:42 crc kubenswrapper[4933]: I1201 09:54:42.702708 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09aff114-dd0c-43db-8a31-1f1d105ac909","Type":"ContainerDied","Data":"263861fa9f1f81dbb73be57d1d3b1d021ffa30c3a91e7fad73787704908193ef"} Dec 01 09:54:42 crc kubenswrapper[4933]: I1201 09:54:42.702720 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09aff114-dd0c-43db-8a31-1f1d105ac909","Type":"ContainerDied","Data":"188105522f3afa9329f09ac1f2cb4df4b3e7ddeec6a5b9c2d37212ea0e9efb00"} Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.357880 4933 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.574764 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.720146 4933 generic.go:334] "Generic (PLEG): container finished" podID="09aff114-dd0c-43db-8a31-1f1d105ac909" containerID="3746d9cd1f34f44122a6f557146e9dff19f23cd5c02b626bb539d4b57eb8a1b8" exitCode=0 Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.720231 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09aff114-dd0c-43db-8a31-1f1d105ac909","Type":"ContainerDied","Data":"3746d9cd1f34f44122a6f557146e9dff19f23cd5c02b626bb539d4b57eb8a1b8"} Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.720297 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09aff114-dd0c-43db-8a31-1f1d105ac909","Type":"ContainerDied","Data":"7b2678935c67f20557d8345448dbae9e92ecbe902518c6d6b289ce31e67478a5"} Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.720352 4933 scope.go:117] "RemoveContainer" containerID="51ee6d1e1d227a6f0575d2703377e241b4f26575dbd892b1707aef745516669e" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.721455 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.728380 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22dsn\" (UniqueName: \"kubernetes.io/projected/09aff114-dd0c-43db-8a31-1f1d105ac909-kube-api-access-22dsn\") pod \"09aff114-dd0c-43db-8a31-1f1d105ac909\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.728491 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-sg-core-conf-yaml\") pod \"09aff114-dd0c-43db-8a31-1f1d105ac909\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.728626 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-config-data\") pod \"09aff114-dd0c-43db-8a31-1f1d105ac909\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.728669 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-ceilometer-tls-certs\") pod \"09aff114-dd0c-43db-8a31-1f1d105ac909\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.728698 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-combined-ca-bundle\") pod \"09aff114-dd0c-43db-8a31-1f1d105ac909\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.728742 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-scripts\") pod \"09aff114-dd0c-43db-8a31-1f1d105ac909\" (UID: 
\"09aff114-dd0c-43db-8a31-1f1d105ac909\") " Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.728910 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09aff114-dd0c-43db-8a31-1f1d105ac909-log-httpd\") pod \"09aff114-dd0c-43db-8a31-1f1d105ac909\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.728950 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09aff114-dd0c-43db-8a31-1f1d105ac909-run-httpd\") pod \"09aff114-dd0c-43db-8a31-1f1d105ac909\" (UID: \"09aff114-dd0c-43db-8a31-1f1d105ac909\") " Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.729839 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09aff114-dd0c-43db-8a31-1f1d105ac909-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "09aff114-dd0c-43db-8a31-1f1d105ac909" (UID: "09aff114-dd0c-43db-8a31-1f1d105ac909"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.730057 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09aff114-dd0c-43db-8a31-1f1d105ac909-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "09aff114-dd0c-43db-8a31-1f1d105ac909" (UID: "09aff114-dd0c-43db-8a31-1f1d105ac909"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.737282 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09aff114-dd0c-43db-8a31-1f1d105ac909-kube-api-access-22dsn" (OuterVolumeSpecName: "kube-api-access-22dsn") pod "09aff114-dd0c-43db-8a31-1f1d105ac909" (UID: "09aff114-dd0c-43db-8a31-1f1d105ac909"). InnerVolumeSpecName "kube-api-access-22dsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.740613 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-scripts" (OuterVolumeSpecName: "scripts") pod "09aff114-dd0c-43db-8a31-1f1d105ac909" (UID: "09aff114-dd0c-43db-8a31-1f1d105ac909"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.751940 4933 scope.go:117] "RemoveContainer" containerID="263861fa9f1f81dbb73be57d1d3b1d021ffa30c3a91e7fad73787704908193ef" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.767101 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "09aff114-dd0c-43db-8a31-1f1d105ac909" (UID: "09aff114-dd0c-43db-8a31-1f1d105ac909"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.792353 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "09aff114-dd0c-43db-8a31-1f1d105ac909" (UID: "09aff114-dd0c-43db-8a31-1f1d105ac909"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.820081 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09aff114-dd0c-43db-8a31-1f1d105ac909" (UID: "09aff114-dd0c-43db-8a31-1f1d105ac909"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.834621 4933 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.834716 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.834733 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.834770 4933 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09aff114-dd0c-43db-8a31-1f1d105ac909-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.834790 4933 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09aff114-dd0c-43db-8a31-1f1d105ac909-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.834805 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22dsn\" (UniqueName: \"kubernetes.io/projected/09aff114-dd0c-43db-8a31-1f1d105ac909-kube-api-access-22dsn\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.834820 4933 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.849452 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-config-data" (OuterVolumeSpecName: "config-data") pod "09aff114-dd0c-43db-8a31-1f1d105ac909" (UID: "09aff114-dd0c-43db-8a31-1f1d105ac909"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.899375 4933 scope.go:117] "RemoveContainer" containerID="3746d9cd1f34f44122a6f557146e9dff19f23cd5c02b626bb539d4b57eb8a1b8" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.926651 4933 scope.go:117] "RemoveContainer" containerID="188105522f3afa9329f09ac1f2cb4df4b3e7ddeec6a5b9c2d37212ea0e9efb00" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.938146 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09aff114-dd0c-43db-8a31-1f1d105ac909-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.955951 4933 scope.go:117] "RemoveContainer" containerID="51ee6d1e1d227a6f0575d2703377e241b4f26575dbd892b1707aef745516669e" Dec 01 09:54:43 crc kubenswrapper[4933]: E1201 09:54:43.956553 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51ee6d1e1d227a6f0575d2703377e241b4f26575dbd892b1707aef745516669e\": container with ID starting with 51ee6d1e1d227a6f0575d2703377e241b4f26575dbd892b1707aef745516669e not found: ID does not exist" containerID="51ee6d1e1d227a6f0575d2703377e241b4f26575dbd892b1707aef745516669e" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.956592 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51ee6d1e1d227a6f0575d2703377e241b4f26575dbd892b1707aef745516669e"} err="failed to get container status \"51ee6d1e1d227a6f0575d2703377e241b4f26575dbd892b1707aef745516669e\": rpc error: code = NotFound desc = could not find container \"51ee6d1e1d227a6f0575d2703377e241b4f26575dbd892b1707aef745516669e\": container with ID starting with 51ee6d1e1d227a6f0575d2703377e241b4f26575dbd892b1707aef745516669e not found: ID does not exist" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.956618 4933 scope.go:117] "RemoveContainer" containerID="263861fa9f1f81dbb73be57d1d3b1d021ffa30c3a91e7fad73787704908193ef" Dec 01 09:54:43 crc kubenswrapper[4933]: E1201 09:54:43.957107 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"263861fa9f1f81dbb73be57d1d3b1d021ffa30c3a91e7fad73787704908193ef\": container with ID starting with 263861fa9f1f81dbb73be57d1d3b1d021ffa30c3a91e7fad73787704908193ef not found: ID does not exist" containerID="263861fa9f1f81dbb73be57d1d3b1d021ffa30c3a91e7fad73787704908193ef" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.957169 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"263861fa9f1f81dbb73be57d1d3b1d021ffa30c3a91e7fad73787704908193ef"} err="failed to get container status \"263861fa9f1f81dbb73be57d1d3b1d021ffa30c3a91e7fad73787704908193ef\": rpc error: code = NotFound desc = could not find container \"263861fa9f1f81dbb73be57d1d3b1d021ffa30c3a91e7fad73787704908193ef\": container with ID starting with 263861fa9f1f81dbb73be57d1d3b1d021ffa30c3a91e7fad73787704908193ef not found: ID does not exist" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.957212 4933 scope.go:117] "RemoveContainer" containerID="3746d9cd1f34f44122a6f557146e9dff19f23cd5c02b626bb539d4b57eb8a1b8" Dec 01 09:54:43 crc kubenswrapper[4933]: E1201 09:54:43.957955 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3746d9cd1f34f44122a6f557146e9dff19f23cd5c02b626bb539d4b57eb8a1b8\": container with ID starting with 3746d9cd1f34f44122a6f557146e9dff19f23cd5c02b626bb539d4b57eb8a1b8 not found: ID does not exist" containerID="3746d9cd1f34f44122a6f557146e9dff19f23cd5c02b626bb539d4b57eb8a1b8" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.957997 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3746d9cd1f34f44122a6f557146e9dff19f23cd5c02b626bb539d4b57eb8a1b8"} err="failed to get container status \"3746d9cd1f34f44122a6f557146e9dff19f23cd5c02b626bb539d4b57eb8a1b8\": rpc error: code = NotFound desc = could not find container \"3746d9cd1f34f44122a6f557146e9dff19f23cd5c02b626bb539d4b57eb8a1b8\": container with ID starting with 3746d9cd1f34f44122a6f557146e9dff19f23cd5c02b626bb539d4b57eb8a1b8 not found: ID does not exist" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.958029 4933 scope.go:117] "RemoveContainer" containerID="188105522f3afa9329f09ac1f2cb4df4b3e7ddeec6a5b9c2d37212ea0e9efb00" Dec 01 09:54:43 crc kubenswrapper[4933]: E1201 09:54:43.958442 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"188105522f3afa9329f09ac1f2cb4df4b3e7ddeec6a5b9c2d37212ea0e9efb00\": container with ID starting with 188105522f3afa9329f09ac1f2cb4df4b3e7ddeec6a5b9c2d37212ea0e9efb00 not found: ID does not exist" containerID="188105522f3afa9329f09ac1f2cb4df4b3e7ddeec6a5b9c2d37212ea0e9efb00" Dec 01 09:54:43 crc kubenswrapper[4933]: I1201 09:54:43.958481 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"188105522f3afa9329f09ac1f2cb4df4b3e7ddeec6a5b9c2d37212ea0e9efb00"} err="failed to get container status \"188105522f3afa9329f09ac1f2cb4df4b3e7ddeec6a5b9c2d37212ea0e9efb00\": rpc error: code = NotFound desc = could not find container \"188105522f3afa9329f09ac1f2cb4df4b3e7ddeec6a5b9c2d37212ea0e9efb00\": container with ID starting with 188105522f3afa9329f09ac1f2cb4df4b3e7ddeec6a5b9c2d37212ea0e9efb00 not found: ID does not exist" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.082676 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.100869 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.114735 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:54:44 crc kubenswrapper[4933]: E1201 09:54:44.115293 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09aff114-dd0c-43db-8a31-1f1d105ac909" containerName="ceilometer-notification-agent" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.115321 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="09aff114-dd0c-43db-8a31-1f1d105ac909" containerName="ceilometer-notification-agent" Dec 01 09:54:44 crc kubenswrapper[4933]: E1201 09:54:44.115337 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09aff114-dd0c-43db-8a31-1f1d105ac909" containerName="sg-core" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.115344 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="09aff114-dd0c-43db-8a31-1f1d105ac909" containerName="sg-core" Dec 01 09:54:44 crc kubenswrapper[4933]: E1201 09:54:44.115368 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09aff114-dd0c-43db-8a31-1f1d105ac909" 
containerName="ceilometer-central-agent" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.115375 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="09aff114-dd0c-43db-8a31-1f1d105ac909" containerName="ceilometer-central-agent" Dec 01 09:54:44 crc kubenswrapper[4933]: E1201 09:54:44.115415 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09aff114-dd0c-43db-8a31-1f1d105ac909" containerName="proxy-httpd" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.115422 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="09aff114-dd0c-43db-8a31-1f1d105ac909" containerName="proxy-httpd" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.115622 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="09aff114-dd0c-43db-8a31-1f1d105ac909" containerName="sg-core" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.115635 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="09aff114-dd0c-43db-8a31-1f1d105ac909" containerName="proxy-httpd" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.115654 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="09aff114-dd0c-43db-8a31-1f1d105ac909" containerName="ceilometer-central-agent" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.115661 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="09aff114-dd0c-43db-8a31-1f1d105ac909" containerName="ceilometer-notification-agent" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.118022 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.126501 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.126923 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.127947 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.128894 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.248795 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77bda02c-44dc-4643-b6d4-4d9f32b260cb-config-data\") pod \"ceilometer-0\" (UID: \"77bda02c-44dc-4643-b6d4-4d9f32b260cb\") " pod="openstack/ceilometer-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.248866 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77bda02c-44dc-4643-b6d4-4d9f32b260cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"77bda02c-44dc-4643-b6d4-4d9f32b260cb\") " pod="openstack/ceilometer-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.249070 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77bda02c-44dc-4643-b6d4-4d9f32b260cb-scripts\") pod \"ceilometer-0\" (UID: \"77bda02c-44dc-4643-b6d4-4d9f32b260cb\") " pod="openstack/ceilometer-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.249163 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-759rs\" (UniqueName: \"kubernetes.io/projected/77bda02c-44dc-4643-b6d4-4d9f32b260cb-kube-api-access-759rs\") pod \"ceilometer-0\" (UID: \"77bda02c-44dc-4643-b6d4-4d9f32b260cb\") " pod="openstack/ceilometer-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.249216 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77bda02c-44dc-4643-b6d4-4d9f32b260cb-log-httpd\") pod \"ceilometer-0\" (UID: \"77bda02c-44dc-4643-b6d4-4d9f32b260cb\") " pod="openstack/ceilometer-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.249247 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77bda02c-44dc-4643-b6d4-4d9f32b260cb-run-httpd\") pod \"ceilometer-0\" (UID: \"77bda02c-44dc-4643-b6d4-4d9f32b260cb\") " pod="openstack/ceilometer-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.249284 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/77bda02c-44dc-4643-b6d4-4d9f32b260cb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"77bda02c-44dc-4643-b6d4-4d9f32b260cb\") " pod="openstack/ceilometer-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.249384 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77bda02c-44dc-4643-b6d4-4d9f32b260cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"77bda02c-44dc-4643-b6d4-4d9f32b260cb\") " pod="openstack/ceilometer-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.352274 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77bda02c-44dc-4643-b6d4-4d9f32b260cb-config-data\") pod \"ceilometer-0\" (UID: \"77bda02c-44dc-4643-b6d4-4d9f32b260cb\") " pod="openstack/ceilometer-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.352512 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77bda02c-44dc-4643-b6d4-4d9f32b260cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"77bda02c-44dc-4643-b6d4-4d9f32b260cb\") " pod="openstack/ceilometer-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.352887 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77bda02c-44dc-4643-b6d4-4d9f32b260cb-scripts\") pod \"ceilometer-0\" (UID: \"77bda02c-44dc-4643-b6d4-4d9f32b260cb\") " pod="openstack/ceilometer-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.353084 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-759rs\" (UniqueName: \"kubernetes.io/projected/77bda02c-44dc-4643-b6d4-4d9f32b260cb-kube-api-access-759rs\") pod \"ceilometer-0\" (UID: \"77bda02c-44dc-4643-b6d4-4d9f32b260cb\") " pod="openstack/ceilometer-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.353212 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77bda02c-44dc-4643-b6d4-4d9f32b260cb-log-httpd\") pod \"ceilometer-0\" (UID: \"77bda02c-44dc-4643-b6d4-4d9f32b260cb\") " pod="openstack/ceilometer-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.353260 
4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77bda02c-44dc-4643-b6d4-4d9f32b260cb-run-httpd\") pod \"ceilometer-0\" (UID: \"77bda02c-44dc-4643-b6d4-4d9f32b260cb\") " pod="openstack/ceilometer-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.353345 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/77bda02c-44dc-4643-b6d4-4d9f32b260cb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"77bda02c-44dc-4643-b6d4-4d9f32b260cb\") " pod="openstack/ceilometer-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.353411 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77bda02c-44dc-4643-b6d4-4d9f32b260cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"77bda02c-44dc-4643-b6d4-4d9f32b260cb\") " pod="openstack/ceilometer-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.359022 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77bda02c-44dc-4643-b6d4-4d9f32b260cb-log-httpd\") pod \"ceilometer-0\" (UID: \"77bda02c-44dc-4643-b6d4-4d9f32b260cb\") " pod="openstack/ceilometer-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.359128 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77bda02c-44dc-4643-b6d4-4d9f32b260cb-run-httpd\") pod \"ceilometer-0\" (UID: \"77bda02c-44dc-4643-b6d4-4d9f32b260cb\") " pod="openstack/ceilometer-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.359546 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77bda02c-44dc-4643-b6d4-4d9f32b260cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"77bda02c-44dc-4643-b6d4-4d9f32b260cb\") " pod="openstack/ceilometer-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.360474 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77bda02c-44dc-4643-b6d4-4d9f32b260cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"77bda02c-44dc-4643-b6d4-4d9f32b260cb\") " pod="openstack/ceilometer-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.363212 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/77bda02c-44dc-4643-b6d4-4d9f32b260cb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"77bda02c-44dc-4643-b6d4-4d9f32b260cb\") " pod="openstack/ceilometer-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.364242 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77bda02c-44dc-4643-b6d4-4d9f32b260cb-config-data\") pod \"ceilometer-0\" (UID: \"77bda02c-44dc-4643-b6d4-4d9f32b260cb\") " pod="openstack/ceilometer-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.367791 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77bda02c-44dc-4643-b6d4-4d9f32b260cb-scripts\") pod \"ceilometer-0\" (UID: \"77bda02c-44dc-4643-b6d4-4d9f32b260cb\") " pod="openstack/ceilometer-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.382003 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-759rs\" (UniqueName: \"kubernetes.io/projected/77bda02c-44dc-4643-b6d4-4d9f32b260cb-kube-api-access-759rs\") pod \"ceilometer-0\" (UID: \"77bda02c-44dc-4643-b6d4-4d9f32b260cb\") " pod="openstack/ceilometer-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.469080 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.484487 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.661673 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbkkb\" (UniqueName: \"kubernetes.io/projected/a99ae55e-670e-451b-8c67-85d537db7077-kube-api-access-xbkkb\") pod \"a99ae55e-670e-451b-8c67-85d537db7077\" (UID: \"a99ae55e-670e-451b-8c67-85d537db7077\") " Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.662176 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a99ae55e-670e-451b-8c67-85d537db7077-config-data\") pod \"a99ae55e-670e-451b-8c67-85d537db7077\" (UID: \"a99ae55e-670e-451b-8c67-85d537db7077\") " Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.662205 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a99ae55e-670e-451b-8c67-85d537db7077-logs\") pod \"a99ae55e-670e-451b-8c67-85d537db7077\" (UID: \"a99ae55e-670e-451b-8c67-85d537db7077\") " Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.662242 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a99ae55e-670e-451b-8c67-85d537db7077-combined-ca-bundle\") pod \"a99ae55e-670e-451b-8c67-85d537db7077\" (UID: \"a99ae55e-670e-451b-8c67-85d537db7077\") " Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.664790 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a99ae55e-670e-451b-8c67-85d537db7077-logs" (OuterVolumeSpecName: "logs") pod "a99ae55e-670e-451b-8c67-85d537db7077" (UID: "a99ae55e-670e-451b-8c67-85d537db7077"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.665452 4933 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a99ae55e-670e-451b-8c67-85d537db7077-logs\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.673888 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a99ae55e-670e-451b-8c67-85d537db7077-kube-api-access-xbkkb" (OuterVolumeSpecName: "kube-api-access-xbkkb") pod "a99ae55e-670e-451b-8c67-85d537db7077" (UID: "a99ae55e-670e-451b-8c67-85d537db7077"). InnerVolumeSpecName "kube-api-access-xbkkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.719374 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a99ae55e-670e-451b-8c67-85d537db7077-config-data" (OuterVolumeSpecName: "config-data") pod "a99ae55e-670e-451b-8c67-85d537db7077" (UID: "a99ae55e-670e-451b-8c67-85d537db7077"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.721990 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a99ae55e-670e-451b-8c67-85d537db7077-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a99ae55e-670e-451b-8c67-85d537db7077" (UID: "a99ae55e-670e-451b-8c67-85d537db7077"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.769065 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbkkb\" (UniqueName: \"kubernetes.io/projected/a99ae55e-670e-451b-8c67-85d537db7077-kube-api-access-xbkkb\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.769108 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a99ae55e-670e-451b-8c67-85d537db7077-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.769121 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a99ae55e-670e-451b-8c67-85d537db7077-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.775329 4933 generic.go:334] "Generic (PLEG): container finished" podID="a99ae55e-670e-451b-8c67-85d537db7077" containerID="9d66eff247bef5885648e8753d25f82dc25ff7240f6f0c5f7e698fd0a14461bd" exitCode=0 Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.775390 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a99ae55e-670e-451b-8c67-85d537db7077","Type":"ContainerDied","Data":"9d66eff247bef5885648e8753d25f82dc25ff7240f6f0c5f7e698fd0a14461bd"} Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.775421 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.775476 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a99ae55e-670e-451b-8c67-85d537db7077","Type":"ContainerDied","Data":"6fe8fbffa750b32136c104132aff04628b3cdbe94823a774f14af8057a2570f1"} Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.775535 4933 scope.go:117] "RemoveContainer" containerID="9d66eff247bef5885648e8753d25f82dc25ff7240f6f0c5f7e698fd0a14461bd" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.833380 4933 scope.go:117] "RemoveContainer" containerID="198a8e438ab377bad55d3001d83a73b7a40f7286e35ee2f4d5af46d26825eece" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.855322 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.873560 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.893454 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 01 09:54:44 crc kubenswrapper[4933]: E1201 09:54:44.894025 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a99ae55e-670e-451b-8c67-85d537db7077" containerName="nova-api-api" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.894057 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="a99ae55e-670e-451b-8c67-85d537db7077" containerName="nova-api-api" Dec 01 09:54:44 crc kubenswrapper[4933]: E1201 09:54:44.894122 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a99ae55e-670e-451b-8c67-85d537db7077" containerName="nova-api-log" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.894130 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="a99ae55e-670e-451b-8c67-85d537db7077" containerName="nova-api-log" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.894322 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="a99ae55e-670e-451b-8c67-85d537db7077" containerName="nova-api-api" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.894339 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="a99ae55e-670e-451b-8c67-85d537db7077" containerName="nova-api-log" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.895680 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.897823 4933 scope.go:117] "RemoveContainer" containerID="9d66eff247bef5885648e8753d25f82dc25ff7240f6f0c5f7e698fd0a14461bd" Dec 01 09:54:44 crc kubenswrapper[4933]: E1201 09:54:44.907036 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d66eff247bef5885648e8753d25f82dc25ff7240f6f0c5f7e698fd0a14461bd\": container with ID starting with 9d66eff247bef5885648e8753d25f82dc25ff7240f6f0c5f7e698fd0a14461bd not found: ID does not exist" containerID="9d66eff247bef5885648e8753d25f82dc25ff7240f6f0c5f7e698fd0a14461bd" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.907466 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d66eff247bef5885648e8753d25f82dc25ff7240f6f0c5f7e698fd0a14461bd"} err="failed to get container status \"9d66eff247bef5885648e8753d25f82dc25ff7240f6f0c5f7e698fd0a14461bd\": rpc error: code = NotFound desc = could not find container \"9d66eff247bef5885648e8753d25f82dc25ff7240f6f0c5f7e698fd0a14461bd\": container with ID starting with 9d66eff247bef5885648e8753d25f82dc25ff7240f6f0c5f7e698fd0a14461bd not found: ID does not exist" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.907650 4933 scope.go:117] "RemoveContainer" containerID="198a8e438ab377bad55d3001d83a73b7a40f7286e35ee2f4d5af46d26825eece" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.907552 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.907571 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.908838 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 01 09:54:44 crc kubenswrapper[4933]: E1201 09:54:44.910732 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"198a8e438ab377bad55d3001d83a73b7a40f7286e35ee2f4d5af46d26825eece\": container with ID starting with 198a8e438ab377bad55d3001d83a73b7a40f7286e35ee2f4d5af46d26825eece not found: ID does not exist" containerID="198a8e438ab377bad55d3001d83a73b7a40f7286e35ee2f4d5af46d26825eece" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.910798 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"198a8e438ab377bad55d3001d83a73b7a40f7286e35ee2f4d5af46d26825eece"} err="failed to get container status \"198a8e438ab377bad55d3001d83a73b7a40f7286e35ee2f4d5af46d26825eece\": rpc error: code = NotFound desc = could not find container \"198a8e438ab377bad55d3001d83a73b7a40f7286e35ee2f4d5af46d26825eece\": container with ID starting with 198a8e438ab377bad55d3001d83a73b7a40f7286e35ee2f4d5af46d26825eece not found: ID does not exist" Dec 01 09:54:44 crc kubenswrapper[4933]: I1201 09:54:44.911397 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:54:45 crc kubenswrapper[4933]: I1201 09:54:45.069196 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:54:45 crc kubenswrapper[4933]: I1201 09:54:45.077736 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-logs\") pod \"nova-api-0\" (UID: \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\") " pod="openstack/nova-api-0" Dec 01 09:54:45 crc kubenswrapper[4933]: I1201 09:54:45.077818 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-config-data\") pod \"nova-api-0\" (UID: \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\") " pod="openstack/nova-api-0" Dec 01 09:54:45 crc kubenswrapper[4933]: I1201 09:54:45.077876 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsrzq\" (UniqueName: \"kubernetes.io/projected/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-kube-api-access-vsrzq\") pod \"nova-api-0\" (UID: \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\") " pod="openstack/nova-api-0" Dec 01 09:54:45 crc kubenswrapper[4933]: I1201 09:54:45.077956 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\") " pod="openstack/nova-api-0" Dec 01 09:54:45 crc kubenswrapper[4933]: I1201 09:54:45.078166 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-public-tls-certs\") pod \"nova-api-0\" (UID: \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\") " pod="openstack/nova-api-0" Dec 01 09:54:45 crc kubenswrapper[4933]: I1201 09:54:45.078467 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\") " pod="openstack/nova-api-0" Dec 01 09:54:45 crc kubenswrapper[4933]: I1201 09:54:45.181004 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\") " pod="openstack/nova-api-0" Dec 01 09:54:45 crc kubenswrapper[4933]: I1201 09:54:45.181617 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-logs\") pod \"nova-api-0\" (UID: \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\") " pod="openstack/nova-api-0" Dec 01 09:54:45 crc kubenswrapper[4933]: I1201 09:54:45.181865 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-config-data\") pod \"nova-api-0\" (UID: \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\") " pod="openstack/nova-api-0" Dec 01 09:54:45 crc kubenswrapper[4933]: I1201 09:54:45.182067 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsrzq\" (UniqueName: \"kubernetes.io/projected/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-kube-api-access-vsrzq\") pod \"nova-api-0\" (UID: \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\") " pod="openstack/nova-api-0" Dec 01 09:54:45 crc kubenswrapper[4933]: I1201 09:54:45.182383 4933 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\") " pod="openstack/nova-api-0" Dec 01 09:54:45 crc kubenswrapper[4933]: I1201 09:54:45.182583 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-public-tls-certs\") pod \"nova-api-0\" (UID: \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\") " pod="openstack/nova-api-0" Dec 01 09:54:45 crc kubenswrapper[4933]: I1201 09:54:45.188777 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-logs\") pod \"nova-api-0\" (UID: \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\") " pod="openstack/nova-api-0" Dec 01 09:54:45 crc kubenswrapper[4933]: I1201 09:54:45.189686 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\") " pod="openstack/nova-api-0" Dec 01 09:54:45 crc kubenswrapper[4933]: I1201 09:54:45.189898 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-public-tls-certs\") pod \"nova-api-0\" (UID: \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\") " pod="openstack/nova-api-0" Dec 01 09:54:45 crc kubenswrapper[4933]: I1201 09:54:45.190081 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\") " pod="openstack/nova-api-0" Dec 01 09:54:45 crc kubenswrapper[4933]: I1201 09:54:45.195530 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-config-data\") pod \"nova-api-0\" (UID: \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\") " pod="openstack/nova-api-0" Dec 01 09:54:45 crc kubenswrapper[4933]: I1201 09:54:45.208726 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsrzq\" (UniqueName: \"kubernetes.io/projected/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-kube-api-access-vsrzq\") pod \"nova-api-0\" (UID: \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\") " pod="openstack/nova-api-0" Dec 01 09:54:45 crc kubenswrapper[4933]: I1201 09:54:45.245330 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 09:54:45 crc kubenswrapper[4933]: I1201 09:54:45.684700 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09aff114-dd0c-43db-8a31-1f1d105ac909" path="/var/lib/kubelet/pods/09aff114-dd0c-43db-8a31-1f1d105ac909/volumes" Dec 01 09:54:45 crc kubenswrapper[4933]: I1201 09:54:45.686988 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a99ae55e-670e-451b-8c67-85d537db7077" path="/var/lib/kubelet/pods/a99ae55e-670e-451b-8c67-85d537db7077/volumes" Dec 01 09:54:45 crc kubenswrapper[4933]: I1201 09:54:45.758411 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:54:45 crc kubenswrapper[4933]: W1201 09:54:45.778721 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3278886_9852_4bd5_96fe_2afd4f1d7eb3.slice/crio-640fbc352e466077cf8662f5319620119ff1f192df01184e74e9b74129d87d23 WatchSource:0}: Error finding container 640fbc352e466077cf8662f5319620119ff1f192df01184e74e9b74129d87d23: Status 404 returned error can't find the container with id 640fbc352e466077cf8662f5319620119ff1f192df01184e74e9b74129d87d23 Dec 01 09:54:45 crc kubenswrapper[4933]: I1201 09:54:45.794733 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77bda02c-44dc-4643-b6d4-4d9f32b260cb","Type":"ContainerStarted","Data":"3b3448c7cff59ad931bcf724335bf266ad69cccdc4b2e0181a5a2cc85ea85f0d"} Dec 01 09:54:46 crc kubenswrapper[4933]: I1201 09:54:46.811104 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77bda02c-44dc-4643-b6d4-4d9f32b260cb","Type":"ContainerStarted","Data":"94cb552ca1e2fe317b177ed6e87f084cc7d39ab292427a6401aa6b12ac321001"} Dec 01 09:54:46 crc kubenswrapper[4933]: I1201 09:54:46.812081 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77bda02c-44dc-4643-b6d4-4d9f32b260cb","Type":"ContainerStarted","Data":"6b9f334126bb70292f6635e1b38fe6dcd7e27fa3030fa28cb3ece8678d1751e9"} Dec 01 09:54:46 crc kubenswrapper[4933]: I1201 09:54:46.815953 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3278886-9852-4bd5-96fe-2afd4f1d7eb3","Type":"ContainerStarted","Data":"0be15f0d7900c5e4ffa2391e458278e4a0637aa8531cf0ca6325ca700b1f395c"} Dec 01 09:54:46 crc kubenswrapper[4933]: I1201 09:54:46.816066 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3278886-9852-4bd5-96fe-2afd4f1d7eb3","Type":"ContainerStarted","Data":"090c68708c3166bb5228aaa2a5586b17e03e5f0eab9f03b7ca4c8151feef0116"} Dec 01 09:54:46 crc kubenswrapper[4933]: I1201 09:54:46.816084 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3278886-9852-4bd5-96fe-2afd4f1d7eb3","Type":"ContainerStarted","Data":"640fbc352e466077cf8662f5319620119ff1f192df01184e74e9b74129d87d23"} Dec 01 09:54:46 crc kubenswrapper[4933]: I1201 09:54:46.844570 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.844522198 podStartE2EDuration="2.844522198s" podCreationTimestamp="2025-12-01 09:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:54:46.838453769 +0000 UTC m=+1377.480177384" watchObservedRunningTime="2025-12-01 09:54:46.844522198 
+0000 UTC m=+1377.486245813" Dec 01 09:54:47 crc kubenswrapper[4933]: I1201 09:54:47.828661 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77bda02c-44dc-4643-b6d4-4d9f32b260cb","Type":"ContainerStarted","Data":"c891e26e5fbf127fdc391b0de0772e626dbd114db6ce29637802e2aaf20fe2ce"} Dec 01 09:54:48 crc kubenswrapper[4933]: I1201 09:54:48.358224 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:48 crc kubenswrapper[4933]: I1201 09:54:48.383689 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:48 crc kubenswrapper[4933]: I1201 09:54:48.434067 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" Dec 01 09:54:48 crc kubenswrapper[4933]: I1201 09:54:48.517350 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-tgfgn"] Dec 01 09:54:48 crc kubenswrapper[4933]: I1201 09:54:48.517594 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" podUID="cff60748-f83c-489e-a7fc-19fb4473f029" containerName="dnsmasq-dns" containerID="cri-o://5d79a727e7b8333af79887f96ba380230eeef8aa55155d605b16c843ae6b1862" gracePeriod=10 Dec 01 09:54:48 crc kubenswrapper[4933]: I1201 09:54:48.853591 4933 generic.go:334] "Generic (PLEG): container finished" podID="cff60748-f83c-489e-a7fc-19fb4473f029" containerID="5d79a727e7b8333af79887f96ba380230eeef8aa55155d605b16c843ae6b1862" exitCode=0 Dec 01 09:54:48 crc kubenswrapper[4933]: I1201 09:54:48.854466 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" event={"ID":"cff60748-f83c-489e-a7fc-19fb4473f029","Type":"ContainerDied","Data":"5d79a727e7b8333af79887f96ba380230eeef8aa55155d605b16c843ae6b1862"} Dec 01 09:54:48 crc kubenswrapper[4933]: I1201 09:54:48.883878 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.170136 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-ksfbz"] Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.172594 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ksfbz" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.175781 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.192639 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.205646 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-ksfbz"] Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.206679 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.271469 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d94caca0-0445-4841-bbaa-2e070afb5051-config-data\") pod \"nova-cell1-cell-mapping-ksfbz\" (UID: \"d94caca0-0445-4841-bbaa-2e070afb5051\") " pod="openstack/nova-cell1-cell-mapping-ksfbz" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.271591 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d94caca0-0445-4841-bbaa-2e070afb5051-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ksfbz\" (UID: \"d94caca0-0445-4841-bbaa-2e070afb5051\") " pod="openstack/nova-cell1-cell-mapping-ksfbz" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.271660 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp2sf\" (UniqueName: \"kubernetes.io/projected/d94caca0-0445-4841-bbaa-2e070afb5051-kube-api-access-hp2sf\") pod \"nova-cell1-cell-mapping-ksfbz\" (UID: \"d94caca0-0445-4841-bbaa-2e070afb5051\") " pod="openstack/nova-cell1-cell-mapping-ksfbz" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.271770 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d94caca0-0445-4841-bbaa-2e070afb5051-scripts\") pod \"nova-cell1-cell-mapping-ksfbz\" (UID: \"d94caca0-0445-4841-bbaa-2e070afb5051\") " pod="openstack/nova-cell1-cell-mapping-ksfbz" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.375761 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-ovsdbserver-sb\") pod \"cff60748-f83c-489e-a7fc-19fb4473f029\" (UID: \"cff60748-f83c-489e-a7fc-19fb4473f029\") " Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.375845 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f624\" (UniqueName: \"kubernetes.io/projected/cff60748-f83c-489e-a7fc-19fb4473f029-kube-api-access-6f624\") pod \"cff60748-f83c-489e-a7fc-19fb4473f029\" (UID: \"cff60748-f83c-489e-a7fc-19fb4473f029\") " Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.375893 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-dns-swift-storage-0\") pod \"cff60748-f83c-489e-a7fc-19fb4473f029\" (UID: \"cff60748-f83c-489e-a7fc-19fb4473f029\") " Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.376033 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-dns-svc\") pod \"cff60748-f83c-489e-a7fc-19fb4473f029\" (UID: \"cff60748-f83c-489e-a7fc-19fb4473f029\") " Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.376090 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-config\") pod \"cff60748-f83c-489e-a7fc-19fb4473f029\" (UID: \"cff60748-f83c-489e-a7fc-19fb4473f029\") " Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.376192 4933 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-ovsdbserver-nb\") pod \"cff60748-f83c-489e-a7fc-19fb4473f029\" (UID: \"cff60748-f83c-489e-a7fc-19fb4473f029\") " Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.376566 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d94caca0-0445-4841-bbaa-2e070afb5051-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ksfbz\" (UID: \"d94caca0-0445-4841-bbaa-2e070afb5051\") " pod="openstack/nova-cell1-cell-mapping-ksfbz" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.376597 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp2sf\" (UniqueName: \"kubernetes.io/projected/d94caca0-0445-4841-bbaa-2e070afb5051-kube-api-access-hp2sf\") pod \"nova-cell1-cell-mapping-ksfbz\" (UID: \"d94caca0-0445-4841-bbaa-2e070afb5051\") " pod="openstack/nova-cell1-cell-mapping-ksfbz" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.376784 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d94caca0-0445-4841-bbaa-2e070afb5051-scripts\") pod \"nova-cell1-cell-mapping-ksfbz\" (UID: \"d94caca0-0445-4841-bbaa-2e070afb5051\") " pod="openstack/nova-cell1-cell-mapping-ksfbz" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.376875 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d94caca0-0445-4841-bbaa-2e070afb5051-config-data\") pod \"nova-cell1-cell-mapping-ksfbz\" (UID: \"d94caca0-0445-4841-bbaa-2e070afb5051\") " pod="openstack/nova-cell1-cell-mapping-ksfbz" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.384081 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d94caca0-0445-4841-bbaa-2e070afb5051-config-data\") pod \"nova-cell1-cell-mapping-ksfbz\" (UID: \"d94caca0-0445-4841-bbaa-2e070afb5051\") " pod="openstack/nova-cell1-cell-mapping-ksfbz" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.387752 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cff60748-f83c-489e-a7fc-19fb4473f029-kube-api-access-6f624" (OuterVolumeSpecName: "kube-api-access-6f624") pod "cff60748-f83c-489e-a7fc-19fb4473f029" (UID: "cff60748-f83c-489e-a7fc-19fb4473f029"). InnerVolumeSpecName "kube-api-access-6f624". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.387849 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d94caca0-0445-4841-bbaa-2e070afb5051-scripts\") pod \"nova-cell1-cell-mapping-ksfbz\" (UID: \"d94caca0-0445-4841-bbaa-2e070afb5051\") " pod="openstack/nova-cell1-cell-mapping-ksfbz" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.394228 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d94caca0-0445-4841-bbaa-2e070afb5051-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ksfbz\" (UID: \"d94caca0-0445-4841-bbaa-2e070afb5051\") " pod="openstack/nova-cell1-cell-mapping-ksfbz" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.397560 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp2sf\" (UniqueName: \"kubernetes.io/projected/d94caca0-0445-4841-bbaa-2e070afb5051-kube-api-access-hp2sf\") pod \"nova-cell1-cell-mapping-ksfbz\" (UID: \"d94caca0-0445-4841-bbaa-2e070afb5051\") " pod="openstack/nova-cell1-cell-mapping-ksfbz" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.445295 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-config" (OuterVolumeSpecName: "config") pod "cff60748-f83c-489e-a7fc-19fb4473f029" (UID: "cff60748-f83c-489e-a7fc-19fb4473f029"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.449557 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cff60748-f83c-489e-a7fc-19fb4473f029" (UID: "cff60748-f83c-489e-a7fc-19fb4473f029"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.453034 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cff60748-f83c-489e-a7fc-19fb4473f029" (UID: "cff60748-f83c-489e-a7fc-19fb4473f029"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.453943 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cff60748-f83c-489e-a7fc-19fb4473f029" (UID: "cff60748-f83c-489e-a7fc-19fb4473f029"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.454554 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cff60748-f83c-489e-a7fc-19fb4473f029" (UID: "cff60748-f83c-489e-a7fc-19fb4473f029"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.479894 4933 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.479998 4933 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.480012 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f624\" (UniqueName: \"kubernetes.io/projected/cff60748-f83c-489e-a7fc-19fb4473f029-kube-api-access-6f624\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.480053 4933 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.480064 4933 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.480077 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cff60748-f83c-489e-a7fc-19fb4473f029-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.524498 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ksfbz" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.872381 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" event={"ID":"cff60748-f83c-489e-a7fc-19fb4473f029","Type":"ContainerDied","Data":"f57aea152a8bd7475734aebf2fb533467e8822a5fcdec3ac9413e34816b8504d"} Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.872977 4933 scope.go:117] "RemoveContainer" containerID="5d79a727e7b8333af79887f96ba380230eeef8aa55155d605b16c843ae6b1862" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.872911 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-tgfgn" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.880882 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77bda02c-44dc-4643-b6d4-4d9f32b260cb","Type":"ContainerStarted","Data":"81ea0fd2d934e8bb694ba51dda3a4ab91b0c20c14743a1cf108636942b27286f"} Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.933491 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-tgfgn"] Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.934693 4933 scope.go:117] "RemoveContainer" containerID="1ce9a84567668ebd6480024e7f63f8bc56d72d7f1cdf92649d64bda2ada2bfd7" Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.946179 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-tgfgn"] Dec 01 09:54:49 crc kubenswrapper[4933]: I1201 09:54:49.947839 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.442743905 podStartE2EDuration="5.947809303s" podCreationTimestamp="2025-12-01 09:54:44 +0000 UTC" firstStartedPulling="2025-12-01 09:54:45.064911437 +0000 UTC m=+1375.706635052" lastFinishedPulling="2025-12-01 09:54:48.569976825 +0000 UTC m=+1379.211700450" observedRunningTime="2025-12-01 09:54:49.936280661 +0000 UTC m=+1380.578004296" watchObservedRunningTime="2025-12-01 09:54:49.947809303 +0000 UTC m=+1380.589532918" Dec 01 09:54:50 crc kubenswrapper[4933]: I1201 09:54:50.132566 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-ksfbz"] Dec 01 09:54:50 crc kubenswrapper[4933]: I1201 09:54:50.923423 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ksfbz" event={"ID":"d94caca0-0445-4841-bbaa-2e070afb5051","Type":"ContainerStarted","Data":"4e386786018e593ebe1e4a7bc445bb8c6ae257aa2ef4957730a36d787f7eee6b"} Dec 01 09:54:50 crc kubenswrapper[4933]: I1201 09:54:50.925559 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 09:54:50 crc kubenswrapper[4933]: I1201 09:54:50.925629 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ksfbz" event={"ID":"d94caca0-0445-4841-bbaa-2e070afb5051","Type":"ContainerStarted","Data":"1e538159c1da5a760159c40740e988fa84ac3df37b9bbe6817a723385808fcb1"} Dec 01 09:54:50 crc kubenswrapper[4933]: I1201 09:54:50.946098 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-ksfbz" podStartSLOduration=1.94606777 podStartE2EDuration="1.94606777s" podCreationTimestamp="2025-12-01 09:54:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:54:50.945875275 +0000 UTC m=+1381.587598890" watchObservedRunningTime="2025-12-01 09:54:50.94606777 +0000 UTC m=+1381.587791395" Dec 01 09:54:51 crc kubenswrapper[4933]: I1201 09:54:51.683035 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cff60748-f83c-489e-a7fc-19fb4473f029" path="/var/lib/kubelet/pods/cff60748-f83c-489e-a7fc-19fb4473f029/volumes" Dec 01 09:54:55 crc kubenswrapper[4933]: I1201 09:54:55.246939 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 09:54:55 crc kubenswrapper[4933]: I1201 09:54:55.248709 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-api-0" Dec 01 09:54:56 crc kubenswrapper[4933]: I1201 09:54:56.256495 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b3278886-9852-4bd5-96fe-2afd4f1d7eb3" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:54:56 crc kubenswrapper[4933]: I1201 09:54:56.256544 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b3278886-9852-4bd5-96fe-2afd4f1d7eb3" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:54:57 crc kubenswrapper[4933]: I1201 09:54:57.021391 4933 generic.go:334] "Generic (PLEG): container finished" podID="d94caca0-0445-4841-bbaa-2e070afb5051" containerID="4e386786018e593ebe1e4a7bc445bb8c6ae257aa2ef4957730a36d787f7eee6b" exitCode=0 Dec 01 09:54:57 crc kubenswrapper[4933]: I1201 09:54:57.021497 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ksfbz" event={"ID":"d94caca0-0445-4841-bbaa-2e070afb5051","Type":"ContainerDied","Data":"4e386786018e593ebe1e4a7bc445bb8c6ae257aa2ef4957730a36d787f7eee6b"} Dec 01 09:54:58 crc kubenswrapper[4933]: I1201 09:54:58.480508 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ksfbz" Dec 01 09:54:58 crc kubenswrapper[4933]: I1201 09:54:58.622098 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp2sf\" (UniqueName: \"kubernetes.io/projected/d94caca0-0445-4841-bbaa-2e070afb5051-kube-api-access-hp2sf\") pod \"d94caca0-0445-4841-bbaa-2e070afb5051\" (UID: \"d94caca0-0445-4841-bbaa-2e070afb5051\") " Dec 01 09:54:58 crc kubenswrapper[4933]: I1201 09:54:58.622735 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d94caca0-0445-4841-bbaa-2e070afb5051-combined-ca-bundle\") pod \"d94caca0-0445-4841-bbaa-2e070afb5051\" (UID: \"d94caca0-0445-4841-bbaa-2e070afb5051\") " Dec 01 09:54:58 crc kubenswrapper[4933]: I1201 09:54:58.622927 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d94caca0-0445-4841-bbaa-2e070afb5051-config-data\") pod \"d94caca0-0445-4841-bbaa-2e070afb5051\" (UID: \"d94caca0-0445-4841-bbaa-2e070afb5051\") " Dec 01 09:54:58 crc kubenswrapper[4933]: I1201 09:54:58.623052 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d94caca0-0445-4841-bbaa-2e070afb5051-scripts\") pod \"d94caca0-0445-4841-bbaa-2e070afb5051\" (UID: \"d94caca0-0445-4841-bbaa-2e070afb5051\") " Dec 01 09:54:58 crc kubenswrapper[4933]: I1201 09:54:58.631689 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d94caca0-0445-4841-bbaa-2e070afb5051-scripts" (OuterVolumeSpecName: "scripts") pod "d94caca0-0445-4841-bbaa-2e070afb5051" (UID: "d94caca0-0445-4841-bbaa-2e070afb5051"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:58 crc kubenswrapper[4933]: I1201 09:54:58.637363 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d94caca0-0445-4841-bbaa-2e070afb5051-kube-api-access-hp2sf" (OuterVolumeSpecName: "kube-api-access-hp2sf") pod "d94caca0-0445-4841-bbaa-2e070afb5051" (UID: "d94caca0-0445-4841-bbaa-2e070afb5051"). InnerVolumeSpecName "kube-api-access-hp2sf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:54:58 crc kubenswrapper[4933]: I1201 09:54:58.665291 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d94caca0-0445-4841-bbaa-2e070afb5051-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d94caca0-0445-4841-bbaa-2e070afb5051" (UID: "d94caca0-0445-4841-bbaa-2e070afb5051"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:58 crc kubenswrapper[4933]: I1201 09:54:58.668937 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d94caca0-0445-4841-bbaa-2e070afb5051-config-data" (OuterVolumeSpecName: "config-data") pod "d94caca0-0445-4841-bbaa-2e070afb5051" (UID: "d94caca0-0445-4841-bbaa-2e070afb5051"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:54:58 crc kubenswrapper[4933]: I1201 09:54:58.727166 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d94caca0-0445-4841-bbaa-2e070afb5051-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:58 crc kubenswrapper[4933]: I1201 09:54:58.727620 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d94caca0-0445-4841-bbaa-2e070afb5051-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:58 crc kubenswrapper[4933]: I1201 09:54:58.727701 4933 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d94caca0-0445-4841-bbaa-2e070afb5051-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:58 crc kubenswrapper[4933]: I1201 09:54:58.727791 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hp2sf\" (UniqueName: \"kubernetes.io/projected/d94caca0-0445-4841-bbaa-2e070afb5051-kube-api-access-hp2sf\") on node \"crc\" DevicePath \"\"" Dec 01 09:54:59 crc kubenswrapper[4933]: I1201 09:54:59.046077 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ksfbz" event={"ID":"d94caca0-0445-4841-bbaa-2e070afb5051","Type":"ContainerDied","Data":"1e538159c1da5a760159c40740e988fa84ac3df37b9bbe6817a723385808fcb1"} Dec 01 09:54:59 crc kubenswrapper[4933]: I1201 09:54:59.046123 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e538159c1da5a760159c40740e988fa84ac3df37b9bbe6817a723385808fcb1" Dec 01 09:54:59 crc kubenswrapper[4933]: I1201 09:54:59.046174 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ksfbz" Dec 01 09:54:59 crc kubenswrapper[4933]: I1201 09:54:59.312697 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:54:59 crc kubenswrapper[4933]: I1201 09:54:59.313042 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b3278886-9852-4bd5-96fe-2afd4f1d7eb3" containerName="nova-api-log" containerID="cri-o://090c68708c3166bb5228aaa2a5586b17e03e5f0eab9f03b7ca4c8151feef0116" gracePeriod=30 Dec 01 09:54:59 crc kubenswrapper[4933]: I1201 09:54:59.313126 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b3278886-9852-4bd5-96fe-2afd4f1d7eb3" containerName="nova-api-api" containerID="cri-o://0be15f0d7900c5e4ffa2391e458278e4a0637aa8531cf0ca6325ca700b1f395c" gracePeriod=30 Dec 01 09:54:59 crc kubenswrapper[4933]: I1201 09:54:59.346444 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:54:59 crc kubenswrapper[4933]: I1201 09:54:59.347028 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7a6c6446-7110-4007-9eed-f99541546756" containerName="nova-scheduler-scheduler" containerID="cri-o://171a30ec01d50180ff24084f713b7a857ea970201aa4ed9c78b13b36f1f3cc31" gracePeriod=30 Dec 01 09:54:59 crc kubenswrapper[4933]: I1201 09:54:59.371149 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:54:59 crc kubenswrapper[4933]: I1201 09:54:59.375960 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b42cd5bd-bb2b-430b-a321-8da42af87665" containerName="nova-metadata-metadata" containerID="cri-o://aa7fd287be4c94b09f11e1e73c1ab00b689e88b568b2c1812c2512fbdf25eafa" gracePeriod=30 Dec 01 09:54:59 crc kubenswrapper[4933]: I1201 09:54:59.376420 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b42cd5bd-bb2b-430b-a321-8da42af87665" containerName="nova-metadata-log" containerID="cri-o://a87eea3a7fab63c68f661c8706f492398ea113b02854e0cd9555c7eb9a490fb0" gracePeriod=30 Dec 01 09:54:59 crc kubenswrapper[4933]: E1201 09:54:59.725424 4933 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="171a30ec01d50180ff24084f713b7a857ea970201aa4ed9c78b13b36f1f3cc31" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 01 09:54:59 crc kubenswrapper[4933]: E1201 09:54:59.728345 4933 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="171a30ec01d50180ff24084f713b7a857ea970201aa4ed9c78b13b36f1f3cc31" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 01 09:54:59 crc kubenswrapper[4933]: E1201 09:54:59.730632 4933 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="171a30ec01d50180ff24084f713b7a857ea970201aa4ed9c78b13b36f1f3cc31" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 01 09:54:59 crc kubenswrapper[4933]: E1201 09:54:59.730757 4933 prober.go:104] "Probe errored" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="7a6c6446-7110-4007-9eed-f99541546756" containerName="nova-scheduler-scheduler" Dec 01 09:55:00 crc kubenswrapper[4933]: I1201 09:55:00.060655 4933 generic.go:334] "Generic (PLEG): container finished" podID="b42cd5bd-bb2b-430b-a321-8da42af87665" containerID="a87eea3a7fab63c68f661c8706f492398ea113b02854e0cd9555c7eb9a490fb0" exitCode=143 Dec 01 09:55:00 crc kubenswrapper[4933]: I1201 09:55:00.060747 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b42cd5bd-bb2b-430b-a321-8da42af87665","Type":"ContainerDied","Data":"a87eea3a7fab63c68f661c8706f492398ea113b02854e0cd9555c7eb9a490fb0"} Dec 01 09:55:00 crc kubenswrapper[4933]: I1201 09:55:00.064169 4933 generic.go:334] "Generic (PLEG): container finished" podID="b3278886-9852-4bd5-96fe-2afd4f1d7eb3" containerID="090c68708c3166bb5228aaa2a5586b17e03e5f0eab9f03b7ca4c8151feef0116" exitCode=143 Dec 01 09:55:00 crc kubenswrapper[4933]: I1201 09:55:00.064222 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3278886-9852-4bd5-96fe-2afd4f1d7eb3","Type":"ContainerDied","Data":"090c68708c3166bb5228aaa2a5586b17e03e5f0eab9f03b7ca4c8151feef0116"} Dec 01 09:55:02 crc kubenswrapper[4933]: I1201 09:55:02.789083 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b42cd5bd-bb2b-430b-a321-8da42af87665" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": read tcp 10.217.0.2:55388->10.217.0.191:8775: read: connection reset by peer" Dec 01 09:55:02 crc kubenswrapper[4933]: I1201 09:55:02.789149 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b42cd5bd-bb2b-430b-a321-8da42af87665" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": read tcp 10.217.0.2:55390->10.217.0.191:8775: read: connection reset by peer" Dec 01 09:55:03 crc kubenswrapper[4933]: I1201 09:55:03.107113 4933 generic.go:334] "Generic (PLEG): container finished" podID="b42cd5bd-bb2b-430b-a321-8da42af87665" containerID="aa7fd287be4c94b09f11e1e73c1ab00b689e88b568b2c1812c2512fbdf25eafa" exitCode=0 Dec 01 09:55:03 crc kubenswrapper[4933]: I1201 09:55:03.107196 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b42cd5bd-bb2b-430b-a321-8da42af87665","Type":"ContainerDied","Data":"aa7fd287be4c94b09f11e1e73c1ab00b689e88b568b2c1812c2512fbdf25eafa"} Dec 01 09:55:03 crc kubenswrapper[4933]: I1201 09:55:03.330464 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 09:55:03 crc kubenswrapper[4933]: I1201 09:55:03.449598 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b42cd5bd-bb2b-430b-a321-8da42af87665-logs\") pod \"b42cd5bd-bb2b-430b-a321-8da42af87665\" (UID: \"b42cd5bd-bb2b-430b-a321-8da42af87665\") " Dec 01 09:55:03 crc kubenswrapper[4933]: I1201 09:55:03.449814 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf8j2\" (UniqueName: \"kubernetes.io/projected/b42cd5bd-bb2b-430b-a321-8da42af87665-kube-api-access-kf8j2\") pod \"b42cd5bd-bb2b-430b-a321-8da42af87665\" (UID: \"b42cd5bd-bb2b-430b-a321-8da42af87665\") " Dec 01 09:55:03 crc kubenswrapper[4933]: I1201 09:55:03.449928 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b42cd5bd-bb2b-430b-a321-8da42af87665-config-data\") pod \"b42cd5bd-bb2b-430b-a321-8da42af87665\" (UID: \"b42cd5bd-bb2b-430b-a321-8da42af87665\") " Dec 01 09:55:03 crc kubenswrapper[4933]: I1201 09:55:03.449978 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b42cd5bd-bb2b-430b-a321-8da42af87665-nova-metadata-tls-certs\") pod \"b42cd5bd-bb2b-430b-a321-8da42af87665\" (UID: \"b42cd5bd-bb2b-430b-a321-8da42af87665\") " Dec 01 09:55:03 crc kubenswrapper[4933]: I1201 09:55:03.450062 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42cd5bd-bb2b-430b-a321-8da42af87665-combined-ca-bundle\") pod \"b42cd5bd-bb2b-430b-a321-8da42af87665\" (UID: \"b42cd5bd-bb2b-430b-a321-8da42af87665\") " Dec 01 09:55:03 crc kubenswrapper[4933]: I1201 09:55:03.450169 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b42cd5bd-bb2b-430b-a321-8da42af87665-logs" (OuterVolumeSpecName: "logs") pod "b42cd5bd-bb2b-430b-a321-8da42af87665" (UID: "b42cd5bd-bb2b-430b-a321-8da42af87665"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:55:03 crc kubenswrapper[4933]: I1201 09:55:03.450783 4933 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b42cd5bd-bb2b-430b-a321-8da42af87665-logs\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:03 crc kubenswrapper[4933]: I1201 09:55:03.458381 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b42cd5bd-bb2b-430b-a321-8da42af87665-kube-api-access-kf8j2" (OuterVolumeSpecName: "kube-api-access-kf8j2") pod "b42cd5bd-bb2b-430b-a321-8da42af87665" (UID: "b42cd5bd-bb2b-430b-a321-8da42af87665"). InnerVolumeSpecName "kube-api-access-kf8j2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:55:03 crc kubenswrapper[4933]: I1201 09:55:03.489356 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b42cd5bd-bb2b-430b-a321-8da42af87665-config-data" (OuterVolumeSpecName: "config-data") pod "b42cd5bd-bb2b-430b-a321-8da42af87665" (UID: "b42cd5bd-bb2b-430b-a321-8da42af87665"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:55:03 crc kubenswrapper[4933]: I1201 09:55:03.504434 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b42cd5bd-bb2b-430b-a321-8da42af87665-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b42cd5bd-bb2b-430b-a321-8da42af87665" (UID: "b42cd5bd-bb2b-430b-a321-8da42af87665"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:55:03 crc kubenswrapper[4933]: I1201 09:55:03.550056 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b42cd5bd-bb2b-430b-a321-8da42af87665-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b42cd5bd-bb2b-430b-a321-8da42af87665" (UID: "b42cd5bd-bb2b-430b-a321-8da42af87665"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:55:03 crc kubenswrapper[4933]: I1201 09:55:03.552957 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf8j2\" (UniqueName: \"kubernetes.io/projected/b42cd5bd-bb2b-430b-a321-8da42af87665-kube-api-access-kf8j2\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:03 crc kubenswrapper[4933]: I1201 09:55:03.553017 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b42cd5bd-bb2b-430b-a321-8da42af87665-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:03 crc kubenswrapper[4933]: I1201 09:55:03.553031 4933 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b42cd5bd-bb2b-430b-a321-8da42af87665-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:03 crc kubenswrapper[4933]: I1201 09:55:03.553055 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42cd5bd-bb2b-430b-a321-8da42af87665-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.119818 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b42cd5bd-bb2b-430b-a321-8da42af87665","Type":"ContainerDied","Data":"6b60c116c88c205569ff4a28f94ffa8e870dd998dd01924c12999e8f21a04246"} Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.119897 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.121530 4933 scope.go:117] "RemoveContainer" containerID="aa7fd287be4c94b09f11e1e73c1ab00b689e88b568b2c1812c2512fbdf25eafa" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.124722 4933 generic.go:334] "Generic (PLEG): container finished" podID="7a6c6446-7110-4007-9eed-f99541546756" containerID="171a30ec01d50180ff24084f713b7a857ea970201aa4ed9c78b13b36f1f3cc31" exitCode=0 Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.124800 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7a6c6446-7110-4007-9eed-f99541546756","Type":"ContainerDied","Data":"171a30ec01d50180ff24084f713b7a857ea970201aa4ed9c78b13b36f1f3cc31"} Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.128415 4933 generic.go:334] "Generic (PLEG): container finished" podID="b3278886-9852-4bd5-96fe-2afd4f1d7eb3" containerID="0be15f0d7900c5e4ffa2391e458278e4a0637aa8531cf0ca6325ca700b1f395c" exitCode=0 Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.128480 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3278886-9852-4bd5-96fe-2afd4f1d7eb3","Type":"ContainerDied","Data":"0be15f0d7900c5e4ffa2391e458278e4a0637aa8531cf0ca6325ca700b1f395c"} Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.151554 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.154118 4933 scope.go:117] "RemoveContainer" containerID="a87eea3a7fab63c68f661c8706f492398ea113b02854e0cd9555c7eb9a490fb0" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.164407 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.181887 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:55:04 crc kubenswrapper[4933]: E1201 09:55:04.187438 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d94caca0-0445-4841-bbaa-2e070afb5051" containerName="nova-manage" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.187501 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d94caca0-0445-4841-bbaa-2e070afb5051" containerName="nova-manage" Dec 01 09:55:04 crc kubenswrapper[4933]: E1201 09:55:04.187531 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b42cd5bd-bb2b-430b-a321-8da42af87665" containerName="nova-metadata-metadata" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.187541 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="b42cd5bd-bb2b-430b-a321-8da42af87665" containerName="nova-metadata-metadata" Dec 01 09:55:04 crc kubenswrapper[4933]: E1201 09:55:04.187565 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cff60748-f83c-489e-a7fc-19fb4473f029" containerName="init" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.187576 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="cff60748-f83c-489e-a7fc-19fb4473f029" containerName="init" Dec 01 09:55:04 crc kubenswrapper[4933]: E1201 09:55:04.187605 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b42cd5bd-bb2b-430b-a321-8da42af87665" containerName="nova-metadata-log" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.187617 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="b42cd5bd-bb2b-430b-a321-8da42af87665" containerName="nova-metadata-log" Dec 01 
09:55:04 crc kubenswrapper[4933]: E1201 09:55:04.187646 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cff60748-f83c-489e-a7fc-19fb4473f029" containerName="dnsmasq-dns" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.187655 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="cff60748-f83c-489e-a7fc-19fb4473f029" containerName="dnsmasq-dns" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.188061 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="d94caca0-0445-4841-bbaa-2e070afb5051" containerName="nova-manage" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.188087 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="b42cd5bd-bb2b-430b-a321-8da42af87665" containerName="nova-metadata-metadata" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.188097 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="b42cd5bd-bb2b-430b-a321-8da42af87665" containerName="nova-metadata-log" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.188111 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="cff60748-f83c-489e-a7fc-19fb4473f029" containerName="dnsmasq-dns" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.189461 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.198283 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.199034 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.199346 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.272398 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55378084-cbcf-4c0c-8bdc-c9d2f026ca3c-config-data\") pod \"nova-metadata-0\" (UID: \"55378084-cbcf-4c0c-8bdc-c9d2f026ca3c\") " pod="openstack/nova-metadata-0" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.272476 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/55378084-cbcf-4c0c-8bdc-c9d2f026ca3c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"55378084-cbcf-4c0c-8bdc-c9d2f026ca3c\") " pod="openstack/nova-metadata-0" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.272533 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55378084-cbcf-4c0c-8bdc-c9d2f026ca3c-logs\") pod \"nova-metadata-0\" (UID: \"55378084-cbcf-4c0c-8bdc-c9d2f026ca3c\") " pod="openstack/nova-metadata-0" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.272570 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55378084-cbcf-4c0c-8bdc-c9d2f026ca3c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"55378084-cbcf-4c0c-8bdc-c9d2f026ca3c\") " pod="openstack/nova-metadata-0" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.272642 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9zcfd\" (UniqueName: \"kubernetes.io/projected/55378084-cbcf-4c0c-8bdc-c9d2f026ca3c-kube-api-access-9zcfd\") pod \"nova-metadata-0\" (UID: \"55378084-cbcf-4c0c-8bdc-c9d2f026ca3c\") " pod="openstack/nova-metadata-0" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.375673 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zcfd\" (UniqueName: \"kubernetes.io/projected/55378084-cbcf-4c0c-8bdc-c9d2f026ca3c-kube-api-access-9zcfd\") pod \"nova-metadata-0\" (UID: \"55378084-cbcf-4c0c-8bdc-c9d2f026ca3c\") " pod="openstack/nova-metadata-0" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.375919 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55378084-cbcf-4c0c-8bdc-c9d2f026ca3c-config-data\") pod \"nova-metadata-0\" (UID: \"55378084-cbcf-4c0c-8bdc-c9d2f026ca3c\") " pod="openstack/nova-metadata-0" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.376959 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/55378084-cbcf-4c0c-8bdc-c9d2f026ca3c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"55378084-cbcf-4c0c-8bdc-c9d2f026ca3c\") " pod="openstack/nova-metadata-0" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.377071 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55378084-cbcf-4c0c-8bdc-c9d2f026ca3c-logs\") pod \"nova-metadata-0\" (UID: \"55378084-cbcf-4c0c-8bdc-c9d2f026ca3c\") " pod="openstack/nova-metadata-0" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.377123 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55378084-cbcf-4c0c-8bdc-c9d2f026ca3c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"55378084-cbcf-4c0c-8bdc-c9d2f026ca3c\") " pod="openstack/nova-metadata-0" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.377630 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55378084-cbcf-4c0c-8bdc-c9d2f026ca3c-logs\") pod \"nova-metadata-0\" (UID: \"55378084-cbcf-4c0c-8bdc-c9d2f026ca3c\") " pod="openstack/nova-metadata-0" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.384784 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55378084-cbcf-4c0c-8bdc-c9d2f026ca3c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"55378084-cbcf-4c0c-8bdc-c9d2f026ca3c\") " pod="openstack/nova-metadata-0" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.384884 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/55378084-cbcf-4c0c-8bdc-c9d2f026ca3c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"55378084-cbcf-4c0c-8bdc-c9d2f026ca3c\") " pod="openstack/nova-metadata-0" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.384940 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55378084-cbcf-4c0c-8bdc-c9d2f026ca3c-config-data\") pod \"nova-metadata-0\" (UID: \"55378084-cbcf-4c0c-8bdc-c9d2f026ca3c\") " pod="openstack/nova-metadata-0" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.398567 4933 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zcfd\" (UniqueName: \"kubernetes.io/projected/55378084-cbcf-4c0c-8bdc-c9d2f026ca3c-kube-api-access-9zcfd\") pod \"nova-metadata-0\" (UID: \"55378084-cbcf-4c0c-8bdc-c9d2f026ca3c\") " pod="openstack/nova-metadata-0" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.526953 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 09:55:04 crc kubenswrapper[4933]: E1201 09:55:04.723113 4933 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 171a30ec01d50180ff24084f713b7a857ea970201aa4ed9c78b13b36f1f3cc31 is running failed: container process not found" containerID="171a30ec01d50180ff24084f713b7a857ea970201aa4ed9c78b13b36f1f3cc31" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 01 09:55:04 crc kubenswrapper[4933]: E1201 09:55:04.724011 4933 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 171a30ec01d50180ff24084f713b7a857ea970201aa4ed9c78b13b36f1f3cc31 is running failed: container process not found" containerID="171a30ec01d50180ff24084f713b7a857ea970201aa4ed9c78b13b36f1f3cc31" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 01 09:55:04 crc kubenswrapper[4933]: E1201 09:55:04.726577 4933 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 171a30ec01d50180ff24084f713b7a857ea970201aa4ed9c78b13b36f1f3cc31 is running failed: container process not found" containerID="171a30ec01d50180ff24084f713b7a857ea970201aa4ed9c78b13b36f1f3cc31" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 01 09:55:04 crc kubenswrapper[4933]: E1201 09:55:04.726648 4933 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 171a30ec01d50180ff24084f713b7a857ea970201aa4ed9c78b13b36f1f3cc31 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="7a6c6446-7110-4007-9eed-f99541546756" containerName="nova-scheduler-scheduler" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.748489 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.790067 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-combined-ca-bundle\") pod \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\" (UID: \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\") " Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.790242 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-logs\") pod \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\" (UID: \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\") " Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.790295 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsrzq\" (UniqueName: \"kubernetes.io/projected/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-kube-api-access-vsrzq\") pod \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\" (UID: \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\") " Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.790345 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-internal-tls-certs\") pod \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\" (UID: \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\") " Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.790469 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-public-tls-certs\") pod \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\" (UID: \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\") " Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.790533 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-config-data\") pod \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\" (UID: \"b3278886-9852-4bd5-96fe-2afd4f1d7eb3\") " Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.792180 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-logs" (OuterVolumeSpecName: "logs") pod "b3278886-9852-4bd5-96fe-2afd4f1d7eb3" (UID: "b3278886-9852-4bd5-96fe-2afd4f1d7eb3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.801281 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-kube-api-access-vsrzq" (OuterVolumeSpecName: "kube-api-access-vsrzq") pod "b3278886-9852-4bd5-96fe-2afd4f1d7eb3" (UID: "b3278886-9852-4bd5-96fe-2afd4f1d7eb3"). InnerVolumeSpecName "kube-api-access-vsrzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.817048 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.826829 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-config-data" (OuterVolumeSpecName: "config-data") pod "b3278886-9852-4bd5-96fe-2afd4f1d7eb3" (UID: "b3278886-9852-4bd5-96fe-2afd4f1d7eb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.851252 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3278886-9852-4bd5-96fe-2afd4f1d7eb3" (UID: "b3278886-9852-4bd5-96fe-2afd4f1d7eb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.873205 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b3278886-9852-4bd5-96fe-2afd4f1d7eb3" (UID: "b3278886-9852-4bd5-96fe-2afd4f1d7eb3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.893033 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a6c6446-7110-4007-9eed-f99541546756-combined-ca-bundle\") pod \"7a6c6446-7110-4007-9eed-f99541546756\" (UID: \"7a6c6446-7110-4007-9eed-f99541546756\") " Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.893659 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a6c6446-7110-4007-9eed-f99541546756-config-data\") pod \"7a6c6446-7110-4007-9eed-f99541546756\" (UID: \"7a6c6446-7110-4007-9eed-f99541546756\") " Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.893810 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9djqz\" (UniqueName: \"kubernetes.io/projected/7a6c6446-7110-4007-9eed-f99541546756-kube-api-access-9djqz\") pod \"7a6c6446-7110-4007-9eed-f99541546756\" (UID: \"7a6c6446-7110-4007-9eed-f99541546756\") " Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.894869 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.895488 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.895531 4933 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-logs\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.895548 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsrzq\" (UniqueName: \"kubernetes.io/projected/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-kube-api-access-vsrzq\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:04 crc 
kubenswrapper[4933]: I1201 09:55:04.895564 4933 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.901385 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a6c6446-7110-4007-9eed-f99541546756-kube-api-access-9djqz" (OuterVolumeSpecName: "kube-api-access-9djqz") pod "7a6c6446-7110-4007-9eed-f99541546756" (UID: "7a6c6446-7110-4007-9eed-f99541546756"). InnerVolumeSpecName "kube-api-access-9djqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.903432 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b3278886-9852-4bd5-96fe-2afd4f1d7eb3" (UID: "b3278886-9852-4bd5-96fe-2afd4f1d7eb3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.925985 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a6c6446-7110-4007-9eed-f99541546756-config-data" (OuterVolumeSpecName: "config-data") pod "7a6c6446-7110-4007-9eed-f99541546756" (UID: "7a6c6446-7110-4007-9eed-f99541546756"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.933812 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a6c6446-7110-4007-9eed-f99541546756-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a6c6446-7110-4007-9eed-f99541546756" (UID: "7a6c6446-7110-4007-9eed-f99541546756"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.997975 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a6c6446-7110-4007-9eed-f99541546756-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.998106 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9djqz\" (UniqueName: \"kubernetes.io/projected/7a6c6446-7110-4007-9eed-f99541546756-kube-api-access-9djqz\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.998125 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a6c6446-7110-4007-9eed-f99541546756-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:04 crc kubenswrapper[4933]: I1201 09:55:04.998139 4933 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3278886-9852-4bd5-96fe-2afd4f1d7eb3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.149721 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7a6c6446-7110-4007-9eed-f99541546756","Type":"ContainerDied","Data":"af7e91f04aa6a2f612f2aea2f97d6486e56d1f6a8592e07e0ea0c5ab10fd04a1"} Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.149814 4933 scope.go:117] "RemoveContainer" containerID="171a30ec01d50180ff24084f713b7a857ea970201aa4ed9c78b13b36f1f3cc31" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.149996 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.155030 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.159691 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3278886-9852-4bd5-96fe-2afd4f1d7eb3","Type":"ContainerDied","Data":"640fbc352e466077cf8662f5319620119ff1f192df01184e74e9b74129d87d23"} Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.159745 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 09:55:05 crc kubenswrapper[4933]: W1201 09:55:05.165902 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55378084_cbcf_4c0c_8bdc_c9d2f026ca3c.slice/crio-39d2a71a973f48f15642c42724d40296d68a97f8ef8fdb87b5dda0279b7caf42 WatchSource:0}: Error finding container 39d2a71a973f48f15642c42724d40296d68a97f8ef8fdb87b5dda0279b7caf42: Status 404 returned error can't find the container with id 39d2a71a973f48f15642c42724d40296d68a97f8ef8fdb87b5dda0279b7caf42 Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.273116 4933 scope.go:117] "RemoveContainer" containerID="0be15f0d7900c5e4ffa2391e458278e4a0637aa8531cf0ca6325ca700b1f395c" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.332541 4933 scope.go:117] "RemoveContainer" containerID="090c68708c3166bb5228aaa2a5586b17e03e5f0eab9f03b7ca4c8151feef0116" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.337901 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.363210 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.391907 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.414422 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.425444 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:55:05 crc kubenswrapper[4933]: E1201 09:55:05.426348 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3278886-9852-4bd5-96fe-2afd4f1d7eb3" containerName="nova-api-log" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.426380 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3278886-9852-4bd5-96fe-2afd4f1d7eb3" containerName="nova-api-log" Dec 01 09:55:05 crc kubenswrapper[4933]: E1201 09:55:05.426447 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a6c6446-7110-4007-9eed-f99541546756" containerName="nova-scheduler-scheduler" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.426459 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a6c6446-7110-4007-9eed-f99541546756" containerName="nova-scheduler-scheduler" Dec 01 09:55:05 crc kubenswrapper[4933]: E1201 09:55:05.426478 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3278886-9852-4bd5-96fe-2afd4f1d7eb3" containerName="nova-api-api" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.426487 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3278886-9852-4bd5-96fe-2afd4f1d7eb3" containerName="nova-api-api" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.426750 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3278886-9852-4bd5-96fe-2afd4f1d7eb3" containerName="nova-api-api" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.426785 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a6c6446-7110-4007-9eed-f99541546756" containerName="nova-scheduler-scheduler" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.426801 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3278886-9852-4bd5-96fe-2afd4f1d7eb3" containerName="nova-api-log" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 
09:55:05.427942 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.431847 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.435962 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.439023 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.442104 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.442408 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.443367 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.453210 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.465364 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.517133 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dvsg\" (UniqueName: \"kubernetes.io/projected/b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265-kube-api-access-6dvsg\") pod \"nova-api-0\" (UID: \"b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265\") " pod="openstack/nova-api-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.517192 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0283919c-d007-4102-a7dd-33bd1388971c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0283919c-d007-4102-a7dd-33bd1388971c\") " pod="openstack/nova-scheduler-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.517259 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0283919c-d007-4102-a7dd-33bd1388971c-config-data\") pod \"nova-scheduler-0\" (UID: \"0283919c-d007-4102-a7dd-33bd1388971c\") " pod="openstack/nova-scheduler-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.517283 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265-public-tls-certs\") pod \"nova-api-0\" (UID: \"b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265\") " pod="openstack/nova-api-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.517376 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265\") " pod="openstack/nova-api-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.517404 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265\") " pod="openstack/nova-api-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.517442 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265-logs\") pod \"nova-api-0\" (UID: \"b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265\") " pod="openstack/nova-api-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.517506 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnlhb\" (UniqueName: \"kubernetes.io/projected/0283919c-d007-4102-a7dd-33bd1388971c-kube-api-access-tnlhb\") pod \"nova-scheduler-0\" (UID: \"0283919c-d007-4102-a7dd-33bd1388971c\") " pod="openstack/nova-scheduler-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.517700 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265-config-data\") pod \"nova-api-0\" (UID: \"b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265\") " pod="openstack/nova-api-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.620623 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnlhb\" (UniqueName: \"kubernetes.io/projected/0283919c-d007-4102-a7dd-33bd1388971c-kube-api-access-tnlhb\") pod \"nova-scheduler-0\" (UID: \"0283919c-d007-4102-a7dd-33bd1388971c\") " pod="openstack/nova-scheduler-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.620709 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265-config-data\") pod \"nova-api-0\" (UID: \"b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265\") " pod="openstack/nova-api-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.620819 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dvsg\" (UniqueName: \"kubernetes.io/projected/b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265-kube-api-access-6dvsg\") pod \"nova-api-0\" (UID: \"b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265\") " pod="openstack/nova-api-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.620857 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0283919c-d007-4102-a7dd-33bd1388971c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0283919c-d007-4102-a7dd-33bd1388971c\") " pod="openstack/nova-scheduler-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.620950 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0283919c-d007-4102-a7dd-33bd1388971c-config-data\") pod \"nova-scheduler-0\" (UID: \"0283919c-d007-4102-a7dd-33bd1388971c\") " pod="openstack/nova-scheduler-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.620972 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265-public-tls-certs\") pod \"nova-api-0\" (UID: \"b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265\") " pod="openstack/nova-api-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.621024 4933 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265\") " pod="openstack/nova-api-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.621062 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265\") " pod="openstack/nova-api-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.621106 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265-logs\") pod \"nova-api-0\" (UID: \"b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265\") " pod="openstack/nova-api-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.621649 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265-logs\") pod \"nova-api-0\" (UID: \"b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265\") " pod="openstack/nova-api-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.628402 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265-public-tls-certs\") pod \"nova-api-0\" (UID: \"b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265\") " pod="openstack/nova-api-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.629216 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265\") " pod="openstack/nova-api-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.631792 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265-config-data\") pod \"nova-api-0\" (UID: \"b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265\") " pod="openstack/nova-api-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.632295 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0283919c-d007-4102-a7dd-33bd1388971c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0283919c-d007-4102-a7dd-33bd1388971c\") " pod="openstack/nova-scheduler-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.632447 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265\") " pod="openstack/nova-api-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.633508 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0283919c-d007-4102-a7dd-33bd1388971c-config-data\") pod \"nova-scheduler-0\" (UID: \"0283919c-d007-4102-a7dd-33bd1388971c\") " pod="openstack/nova-scheduler-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.645116 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnlhb\" (UniqueName: 
\"kubernetes.io/projected/0283919c-d007-4102-a7dd-33bd1388971c-kube-api-access-tnlhb\") pod \"nova-scheduler-0\" (UID: \"0283919c-d007-4102-a7dd-33bd1388971c\") " pod="openstack/nova-scheduler-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.645254 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dvsg\" (UniqueName: \"kubernetes.io/projected/b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265-kube-api-access-6dvsg\") pod \"nova-api-0\" (UID: \"b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265\") " pod="openstack/nova-api-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.686010 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a6c6446-7110-4007-9eed-f99541546756" path="/var/lib/kubelet/pods/7a6c6446-7110-4007-9eed-f99541546756/volumes" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.688503 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3278886-9852-4bd5-96fe-2afd4f1d7eb3" path="/var/lib/kubelet/pods/b3278886-9852-4bd5-96fe-2afd4f1d7eb3/volumes" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.689896 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b42cd5bd-bb2b-430b-a321-8da42af87665" path="/var/lib/kubelet/pods/b42cd5bd-bb2b-430b-a321-8da42af87665/volumes" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.761957 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 09:55:05 crc kubenswrapper[4933]: I1201 09:55:05.839677 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 01 09:55:06 crc kubenswrapper[4933]: I1201 09:55:06.177065 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"55378084-cbcf-4c0c-8bdc-c9d2f026ca3c","Type":"ContainerStarted","Data":"507ba4a4a960acf3ca16cb2493a1465705f6684a25b16d62c68f5e1738ffacb2"} Dec 01 09:55:06 crc kubenswrapper[4933]: I1201 09:55:06.177635 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"55378084-cbcf-4c0c-8bdc-c9d2f026ca3c","Type":"ContainerStarted","Data":"3b057ea461ddfd808a0a9a02d10851831e32449e4d72e1dd9ca0ae6d4ae695e4"} Dec 01 09:55:06 crc kubenswrapper[4933]: I1201 09:55:06.177660 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"55378084-cbcf-4c0c-8bdc-c9d2f026ca3c","Type":"ContainerStarted","Data":"39d2a71a973f48f15642c42724d40296d68a97f8ef8fdb87b5dda0279b7caf42"} Dec 01 09:55:06 crc kubenswrapper[4933]: I1201 09:55:06.205728 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.205691833 podStartE2EDuration="2.205691833s" podCreationTimestamp="2025-12-01 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:55:06.202346561 +0000 UTC m=+1396.844070186" watchObservedRunningTime="2025-12-01 09:55:06.205691833 +0000 UTC m=+1396.847415448" Dec 01 09:55:06 crc kubenswrapper[4933]: W1201 09:55:06.290911 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0283919c_d007_4102_a7dd_33bd1388971c.slice/crio-64ee55540b47ecdff828155ed1001886c948c43f79f51feef3da0e8f683251f9 WatchSource:0}: Error finding container 64ee55540b47ecdff828155ed1001886c948c43f79f51feef3da0e8f683251f9: Status 404 returned error can't 
find the container with id 64ee55540b47ecdff828155ed1001886c948c43f79f51feef3da0e8f683251f9 Dec 01 09:55:06 crc kubenswrapper[4933]: I1201 09:55:06.291241 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:55:06 crc kubenswrapper[4933]: I1201 09:55:06.430319 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:55:06 crc kubenswrapper[4933]: W1201 09:55:06.432175 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8e55a9c_44f6_4b6b_94a4_9d9d4b50f265.slice/crio-f718418eb7734193416699ed4685f6324a691b76cea9b3614b506aa0b4ab34b7 WatchSource:0}: Error finding container f718418eb7734193416699ed4685f6324a691b76cea9b3614b506aa0b4ab34b7: Status 404 returned error can't find the container with id f718418eb7734193416699ed4685f6324a691b76cea9b3614b506aa0b4ab34b7 Dec 01 09:55:07 crc kubenswrapper[4933]: I1201 09:55:07.193783 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0283919c-d007-4102-a7dd-33bd1388971c","Type":"ContainerStarted","Data":"ad63ef49c3f462bb730c1e09bc35861836b7834c127a8301d1b16aaa6b3e8a0f"} Dec 01 09:55:07 crc kubenswrapper[4933]: I1201 09:55:07.194477 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0283919c-d007-4102-a7dd-33bd1388971c","Type":"ContainerStarted","Data":"64ee55540b47ecdff828155ed1001886c948c43f79f51feef3da0e8f683251f9"} Dec 01 09:55:07 crc kubenswrapper[4933]: I1201 09:55:07.199078 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265","Type":"ContainerStarted","Data":"55ece99884713e5a47d6a1d195d58f443c812db9cdd2ece87fd158cf04a19245"} Dec 01 09:55:07 crc kubenswrapper[4933]: I1201 09:55:07.199145 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265","Type":"ContainerStarted","Data":"1f511c814828f92334ed210cfed306994164a588490ecaeb286831ea4674dfcc"} Dec 01 09:55:07 crc kubenswrapper[4933]: I1201 09:55:07.199156 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265","Type":"ContainerStarted","Data":"f718418eb7734193416699ed4685f6324a691b76cea9b3614b506aa0b4ab34b7"} Dec 01 09:55:07 crc kubenswrapper[4933]: I1201 09:55:07.221663 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.221638612 podStartE2EDuration="2.221638612s" podCreationTimestamp="2025-12-01 09:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:55:07.214906588 +0000 UTC m=+1397.856630223" watchObservedRunningTime="2025-12-01 09:55:07.221638612 +0000 UTC m=+1397.863362227" Dec 01 09:55:07 crc kubenswrapper[4933]: I1201 09:55:07.242544 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.242514912 podStartE2EDuration="2.242514912s" podCreationTimestamp="2025-12-01 09:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:55:07.240433452 +0000 UTC m=+1397.882157097" watchObservedRunningTime="2025-12-01 09:55:07.242514912 +0000 UTC m=+1397.884238527" Dec 01 09:55:09 
crc kubenswrapper[4933]: I1201 09:55:09.528106 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 01 09:55:09 crc kubenswrapper[4933]: I1201 09:55:09.528728 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 01 09:55:10 crc kubenswrapper[4933]: I1201 09:55:10.762593 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 01 09:55:11 crc kubenswrapper[4933]: I1201 09:55:11.741416 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:55:11 crc kubenswrapper[4933]: I1201 09:55:11.741513 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:55:14 crc kubenswrapper[4933]: I1201 09:55:14.482271 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 01 09:55:14 crc kubenswrapper[4933]: I1201 09:55:14.528107 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 01 09:55:14 crc kubenswrapper[4933]: I1201 09:55:14.530960 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 01 09:55:15 crc kubenswrapper[4933]: I1201 09:55:15.543812 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="55378084-cbcf-4c0c-8bdc-c9d2f026ca3c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:55:15 crc kubenswrapper[4933]: I1201 09:55:15.543893 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="55378084-cbcf-4c0c-8bdc-c9d2f026ca3c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:55:15 crc kubenswrapper[4933]: I1201 09:55:15.762393 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 01 09:55:15 crc kubenswrapper[4933]: I1201 09:55:15.799221 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 01 09:55:15 crc kubenswrapper[4933]: I1201 09:55:15.841174 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 09:55:15 crc kubenswrapper[4933]: I1201 09:55:15.842809 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 09:55:16 crc kubenswrapper[4933]: I1201 09:55:16.359757 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 01 09:55:16 crc kubenswrapper[4933]: I1201 09:55:16.854564 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265" containerName="nova-api-api" 
probeResult="failure" output="Get \"https://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:55:16 crc kubenswrapper[4933]: I1201 09:55:16.854638 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:55:24 crc kubenswrapper[4933]: I1201 09:55:24.534585 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 01 09:55:24 crc kubenswrapper[4933]: I1201 09:55:24.541171 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 01 09:55:24 crc kubenswrapper[4933]: I1201 09:55:24.549102 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 01 09:55:25 crc kubenswrapper[4933]: I1201 09:55:25.429043 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 01 09:55:25 crc kubenswrapper[4933]: I1201 09:55:25.848276 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 01 09:55:25 crc kubenswrapper[4933]: I1201 09:55:25.849405 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 01 09:55:25 crc kubenswrapper[4933]: I1201 09:55:25.849521 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 01 09:55:25 crc kubenswrapper[4933]: I1201 09:55:25.854199 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 01 09:55:26 crc kubenswrapper[4933]: I1201 09:55:26.438871 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 01 09:55:26 crc kubenswrapper[4933]: I1201 09:55:26.446382 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 01 09:55:34 crc kubenswrapper[4933]: I1201 09:55:34.833910 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 09:55:36 crc kubenswrapper[4933]: I1201 09:55:36.014140 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 09:55:39 crc kubenswrapper[4933]: I1201 09:55:39.945719 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="3d9a36ba-b2c3-4f85-96d6-608d8e9749ec" containerName="rabbitmq" containerID="cri-o://fa751b0aa63209b66f2cc4715529e50651a08c1e4cf44398f562d58938f15044" gracePeriod=604795 Dec 01 09:55:41 crc kubenswrapper[4933]: I1201 09:55:41.281469 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="b8f90456-f375-447c-8f32-8ca629a28861" containerName="rabbitmq" containerID="cri-o://623eb7fa1a62caea1200abf885cf68400135d41dca63d2762217dce664ca47fd" gracePeriod=604795 Dec 01 09:55:41 crc kubenswrapper[4933]: I1201 09:55:41.740670 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:55:41 crc 
kubenswrapper[4933]: I1201 09:55:41.741176 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:55:46 crc kubenswrapper[4933]: I1201 09:55:46.373596 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="b8f90456-f375-447c-8f32-8ca629a28861" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Dec 01 09:55:46 crc kubenswrapper[4933]: I1201 09:55:46.695512 4933 generic.go:334] "Generic (PLEG): container finished" podID="3d9a36ba-b2c3-4f85-96d6-608d8e9749ec" containerID="fa751b0aa63209b66f2cc4715529e50651a08c1e4cf44398f562d58938f15044" exitCode=0 Dec 01 09:55:46 crc kubenswrapper[4933]: I1201 09:55:46.695582 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec","Type":"ContainerDied","Data":"fa751b0aa63209b66f2cc4715529e50651a08c1e4cf44398f562d58938f15044"} Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.144688 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.314027 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-rabbitmq-plugins\") pod \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.314105 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmcdh\" (UniqueName: \"kubernetes.io/projected/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-kube-api-access-tmcdh\") pod \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.314225 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-rabbitmq-erlang-cookie\") pod \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.314968 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "3d9a36ba-b2c3-4f85-96d6-608d8e9749ec" (UID: "3d9a36ba-b2c3-4f85-96d6-608d8e9749ec"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.315096 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "3d9a36ba-b2c3-4f85-96d6-608d8e9749ec" (UID: "3d9a36ba-b2c3-4f85-96d6-608d8e9749ec"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.315183 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-server-conf\") pod \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.315567 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-rabbitmq-confd\") pod \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.315621 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.315671 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-pod-info\") pod \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.315717 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-config-data\") pod \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.315787 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-rabbitmq-tls\") pod \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.315820 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-plugins-conf\") pod \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.315879 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-erlang-cookie-secret\") pod \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\" (UID: \"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec\") " Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.316832 4933 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.316860 4933 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.316982 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "3d9a36ba-b2c3-4f85-96d6-608d8e9749ec" (UID: "3d9a36ba-b2c3-4f85-96d6-608d8e9749ec"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.323254 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-pod-info" (OuterVolumeSpecName: "pod-info") pod "3d9a36ba-b2c3-4f85-96d6-608d8e9749ec" (UID: "3d9a36ba-b2c3-4f85-96d6-608d8e9749ec"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.323427 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-kube-api-access-tmcdh" (OuterVolumeSpecName: "kube-api-access-tmcdh") pod "3d9a36ba-b2c3-4f85-96d6-608d8e9749ec" (UID: "3d9a36ba-b2c3-4f85-96d6-608d8e9749ec"). InnerVolumeSpecName "kube-api-access-tmcdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.332902 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "3d9a36ba-b2c3-4f85-96d6-608d8e9749ec" (UID: "3d9a36ba-b2c3-4f85-96d6-608d8e9749ec"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.358591 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "3d9a36ba-b2c3-4f85-96d6-608d8e9749ec" (UID: "3d9a36ba-b2c3-4f85-96d6-608d8e9749ec"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.371524 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "3d9a36ba-b2c3-4f85-96d6-608d8e9749ec" (UID: "3d9a36ba-b2c3-4f85-96d6-608d8e9749ec"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.420618 4933 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.420659 4933 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.420671 4933 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.420681 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmcdh\" (UniqueName: \"kubernetes.io/projected/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-kube-api-access-tmcdh\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.420710 4933 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.420720 4933 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-pod-info\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.445047 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-config-data" (OuterVolumeSpecName: "config-data") pod "3d9a36ba-b2c3-4f85-96d6-608d8e9749ec" (UID: "3d9a36ba-b2c3-4f85-96d6-608d8e9749ec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.493975 4933 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.609178 4933 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.609209 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.642753 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-server-conf" (OuterVolumeSpecName: "server-conf") pod "3d9a36ba-b2c3-4f85-96d6-608d8e9749ec" (UID: "3d9a36ba-b2c3-4f85-96d6-608d8e9749ec"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.716625 4933 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-server-conf\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.733427 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "3d9a36ba-b2c3-4f85-96d6-608d8e9749ec" (UID: "3d9a36ba-b2c3-4f85-96d6-608d8e9749ec"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.734818 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.734876 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3d9a36ba-b2c3-4f85-96d6-608d8e9749ec","Type":"ContainerDied","Data":"752c53b2a372f6ed1ebf0a780319dc9129efb939adb3423f80370693eb6b7cc9"} Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.735082 4933 scope.go:117] "RemoveContainer" containerID="fa751b0aa63209b66f2cc4715529e50651a08c1e4cf44398f562d58938f15044" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.737675 4933 generic.go:334] "Generic (PLEG): container finished" podID="b8f90456-f375-447c-8f32-8ca629a28861" containerID="623eb7fa1a62caea1200abf885cf68400135d41dca63d2762217dce664ca47fd" exitCode=0 Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.737705 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b8f90456-f375-447c-8f32-8ca629a28861","Type":"ContainerDied","Data":"623eb7fa1a62caea1200abf885cf68400135d41dca63d2762217dce664ca47fd"} Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.780903 4933 scope.go:117] "RemoveContainer" containerID="50e61c5cd567cfe70fd9d90579b11db9d8c588d75c47667676368152554b647e" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.796590 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.809398 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.829161 4933 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.839437 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 09:55:47 crc kubenswrapper[4933]: E1201 09:55:47.845954 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9a36ba-b2c3-4f85-96d6-608d8e9749ec" containerName="setup-container" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.846009 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9a36ba-b2c3-4f85-96d6-608d8e9749ec" containerName="setup-container" Dec 01 09:55:47 crc kubenswrapper[4933]: E1201 09:55:47.846043 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9a36ba-b2c3-4f85-96d6-608d8e9749ec" containerName="rabbitmq" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.846049 4933 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9a36ba-b2c3-4f85-96d6-608d8e9749ec" containerName="rabbitmq" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.846407 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d9a36ba-b2c3-4f85-96d6-608d8e9749ec" containerName="rabbitmq" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.853991 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.856958 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.858510 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.858713 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.858927 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.860328 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.860353 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-6g7jz" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.860420 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 01 09:55:47 crc kubenswrapper[4933]: I1201 09:55:47.875072 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.026247 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.037588 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.037674 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.037697 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.037756 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.037823 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94wn7\" (UniqueName: \"kubernetes.io/projected/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-kube-api-access-94wn7\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.037845 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.037867 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.037927 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.038025 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc 
kubenswrapper[4933]: I1201 09:55:48.038092 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-config-data\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.038123 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.140696 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"b8f90456-f375-447c-8f32-8ca629a28861\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.140783 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79bgq\" (UniqueName: \"kubernetes.io/projected/b8f90456-f375-447c-8f32-8ca629a28861-kube-api-access-79bgq\") pod \"b8f90456-f375-447c-8f32-8ca629a28861\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.140835 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b8f90456-f375-447c-8f32-8ca629a28861-rabbitmq-plugins\") pod \"b8f90456-f375-447c-8f32-8ca629a28861\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.140987 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b8f90456-f375-447c-8f32-8ca629a28861-rabbitmq-erlang-cookie\") pod \"b8f90456-f375-447c-8f32-8ca629a28861\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.141054 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b8f90456-f375-447c-8f32-8ca629a28861-plugins-conf\") pod \"b8f90456-f375-447c-8f32-8ca629a28861\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.141286 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b8f90456-f375-447c-8f32-8ca629a28861-rabbitmq-confd\") pod \"b8f90456-f375-447c-8f32-8ca629a28861\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.141349 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b8f90456-f375-447c-8f32-8ca629a28861-rabbitmq-tls\") pod \"b8f90456-f375-447c-8f32-8ca629a28861\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.141371 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b8f90456-f375-447c-8f32-8ca629a28861-pod-info\") pod \"b8f90456-f375-447c-8f32-8ca629a28861\" (UID: 
\"b8f90456-f375-447c-8f32-8ca629a28861\") " Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.141425 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b8f90456-f375-447c-8f32-8ca629a28861-server-conf\") pod \"b8f90456-f375-447c-8f32-8ca629a28861\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.141459 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8f90456-f375-447c-8f32-8ca629a28861-config-data\") pod \"b8f90456-f375-447c-8f32-8ca629a28861\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.141501 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b8f90456-f375-447c-8f32-8ca629a28861-erlang-cookie-secret\") pod \"b8f90456-f375-447c-8f32-8ca629a28861\" (UID: \"b8f90456-f375-447c-8f32-8ca629a28861\") " Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.141931 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.141978 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.142001 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.142048 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94wn7\" (UniqueName: \"kubernetes.io/projected/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-kube-api-access-94wn7\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.142065 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.142085 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.142126 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.142148 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.142208 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-config-data\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.142236 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.142848 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.141933 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8f90456-f375-447c-8f32-8ca629a28861-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b8f90456-f375-447c-8f32-8ca629a28861" (UID: "b8f90456-f375-447c-8f32-8ca629a28861"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.143352 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8f90456-f375-447c-8f32-8ca629a28861-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b8f90456-f375-447c-8f32-8ca629a28861" (UID: "b8f90456-f375-447c-8f32-8ca629a28861"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.144193 4933 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.145809 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8f90456-f375-447c-8f32-8ca629a28861-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b8f90456-f375-447c-8f32-8ca629a28861" (UID: "b8f90456-f375-447c-8f32-8ca629a28861"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.148327 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-config-data\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.149778 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "b8f90456-f375-447c-8f32-8ca629a28861" (UID: "b8f90456-f375-447c-8f32-8ca629a28861"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.150106 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.152052 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.152154 4933 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b8f90456-f375-447c-8f32-8ca629a28861-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.152170 4933 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b8f90456-f375-447c-8f32-8ca629a28861-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.152236 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.152570 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.153136 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.154400 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 
09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.154735 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8f90456-f375-447c-8f32-8ca629a28861-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b8f90456-f375-447c-8f32-8ca629a28861" (UID: "b8f90456-f375-447c-8f32-8ca629a28861"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.155125 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8f90456-f375-447c-8f32-8ca629a28861-kube-api-access-79bgq" (OuterVolumeSpecName: "kube-api-access-79bgq") pod "b8f90456-f375-447c-8f32-8ca629a28861" (UID: "b8f90456-f375-447c-8f32-8ca629a28861"). InnerVolumeSpecName "kube-api-access-79bgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.156298 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8f90456-f375-447c-8f32-8ca629a28861-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b8f90456-f375-447c-8f32-8ca629a28861" (UID: "b8f90456-f375-447c-8f32-8ca629a28861"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.159092 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.165234 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b8f90456-f375-447c-8f32-8ca629a28861-pod-info" (OuterVolumeSpecName: "pod-info") pod "b8f90456-f375-447c-8f32-8ca629a28861" (UID: "b8f90456-f375-447c-8f32-8ca629a28861"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.165863 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.171832 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94wn7\" (UniqueName: \"kubernetes.io/projected/4b205db3-c812-4f4e-a81c-3662f2ca0cf1-kube-api-access-94wn7\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.228754 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8f90456-f375-447c-8f32-8ca629a28861-config-data" (OuterVolumeSpecName: "config-data") pod "b8f90456-f375-447c-8f32-8ca629a28861" (UID: "b8f90456-f375-447c-8f32-8ca629a28861"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.243219 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"4b205db3-c812-4f4e-a81c-3662f2ca0cf1\") " pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.248740 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8f90456-f375-447c-8f32-8ca629a28861-server-conf" (OuterVolumeSpecName: "server-conf") pod "b8f90456-f375-447c-8f32-8ca629a28861" (UID: "b8f90456-f375-447c-8f32-8ca629a28861"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.255245 4933 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b8f90456-f375-447c-8f32-8ca629a28861-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.255298 4933 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b8f90456-f375-447c-8f32-8ca629a28861-pod-info\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.255330 4933 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b8f90456-f375-447c-8f32-8ca629a28861-server-conf\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.255343 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8f90456-f375-447c-8f32-8ca629a28861-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.255356 4933 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b8f90456-f375-447c-8f32-8ca629a28861-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.255415 4933 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.255433 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79bgq\" (UniqueName: \"kubernetes.io/projected/b8f90456-f375-447c-8f32-8ca629a28861-kube-api-access-79bgq\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.255451 4933 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b8f90456-f375-447c-8f32-8ca629a28861-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.292331 4933 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.345973 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8f90456-f375-447c-8f32-8ca629a28861-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b8f90456-f375-447c-8f32-8ca629a28861" (UID: "b8f90456-f375-447c-8f32-8ca629a28861"). 
InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.357237 4933 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b8f90456-f375-447c-8f32-8ca629a28861-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.357282 4933 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.522289 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.755730 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b8f90456-f375-447c-8f32-8ca629a28861","Type":"ContainerDied","Data":"d461da9c89612e7a1026ab43c9ed5302c0af2c1a4941a7d4082105bc8f9d10fe"} Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.756414 4933 scope.go:117] "RemoveContainer" containerID="623eb7fa1a62caea1200abf885cf68400135d41dca63d2762217dce664ca47fd" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.756698 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.798696 4933 scope.go:117] "RemoveContainer" containerID="649eb745891b3ba68ed59fafd553564f944a61857c2db3028ded94f18160e91a" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.817675 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.844289 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.866761 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 09:55:48 crc kubenswrapper[4933]: E1201 09:55:48.867891 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8f90456-f375-447c-8f32-8ca629a28861" containerName="setup-container" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.868337 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8f90456-f375-447c-8f32-8ca629a28861" containerName="setup-container" Dec 01 09:55:48 crc kubenswrapper[4933]: E1201 09:55:48.868453 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8f90456-f375-447c-8f32-8ca629a28861" containerName="rabbitmq" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.868525 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8f90456-f375-447c-8f32-8ca629a28861" containerName="rabbitmq" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.868926 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8f90456-f375-447c-8f32-8ca629a28861" containerName="rabbitmq" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.870471 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.890421 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-r782b" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.890951 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.891204 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.891498 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.891712 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.891920 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.892389 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.893412 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.910011 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.976193 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3ac85014-ac29-45f6-9461-a8c02c4fcca4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.976267 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ac85014-ac29-45f6-9461-a8c02c4fcca4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.976322 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3ac85014-ac29-45f6-9461-a8c02c4fcca4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.976670 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3ac85014-ac29-45f6-9461-a8c02c4fcca4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.976751 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 
01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.976872 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3ac85014-ac29-45f6-9461-a8c02c4fcca4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.976909 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3ac85014-ac29-45f6-9461-a8c02c4fcca4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.976985 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3ac85014-ac29-45f6-9461-a8c02c4fcca4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.977427 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3ac85014-ac29-45f6-9461-a8c02c4fcca4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.977622 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3ac85014-ac29-45f6-9461-a8c02c4fcca4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:48 crc kubenswrapper[4933]: I1201 09:55:48.977837 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6zlk\" (UniqueName: \"kubernetes.io/projected/3ac85014-ac29-45f6-9461-a8c02c4fcca4-kube-api-access-f6zlk\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.080380 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3ac85014-ac29-45f6-9461-a8c02c4fcca4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.080902 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3ac85014-ac29-45f6-9461-a8c02c4fcca4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.080952 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3ac85014-ac29-45f6-9461-a8c02c4fcca4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" 
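[Editor's note, not journal output.] The block above is the kubelet volume reconciler recreating both rabbitmq pods: rabbitmq-server-0's volumes are torn down under the old pod UID 3d9a36ba-… (UnmountVolume started, then TearDown succeeded, then "Volume detached") while the same volume spec names, including the local PV local-storage02-crc, are mounted for the replacement UID 4b205db3-…; rabbitmq-cell1-server-0 repeats the cycle, trading UID b8f90456-… for 3ac85014-… on local-storage08-crc. The following is a minimal sketch, not a definitive tool, for reconstructing that per-volume order. It assumes the journal has been captured to a file named kubelet.log (for example via journalctl, piped to a file); the file name and report format are illustrative, while the message substrings are copied from the reconciler_common.go / operation_generator.go lines above.

import re
from collections import defaultdict

# OuterVolumeSpecName, when present, gives the pod-facing volume name; the
# fallback captures the first quoted token after 'volume'.  Quotes appear
# escaped (\") inside klog key=value fields, hence the optional backslash.
SPEC = re.compile(r'OuterVolumeSpecName: \\?"([^"\\]+)')
NAME = re.compile(r'volume \\?"([^"\\]+)')

STAGES = (
    "operationExecutor.UnmountVolume started",
    "UnmountVolume.TearDown succeeded",
    "Volume detached",
    "operationExecutor.MountVolume started",
    "MountVolume.SetUp succeeded",
)

def volume_timeline(path="kubelet.log"):
    timeline = defaultdict(list)  # volume spec name -> [(timestamp, stage)]
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for stage in STAGES:
                if stage in line:
                    m = SPEC.search(line) or NAME.search(line)
                    if m:
                        # journal lines start "Dec 01 09:55:48 ..."; the
                        # first 15 characters serve as a coarse timestamp
                        timeline[m.group(1)].append((line[:15], stage))
                    break
    return timeline

if __name__ == "__main__":
    for volume, events in sorted(volume_timeline().items()):
        print(volume)
        for ts, stage in events:
            print(f"  {ts}  {stage}")

Keying on OuterVolumeSpecName (falling back to the quoted name) is what lets a detach under the old pod UID pair up with the SetUp under the new one: the UniqueName of pod-scoped volumes embeds the pod UID, so it never matches across a delete/recreate, whereas the spec name (rabbitmq-plugins, server-conf, and so on) is stable.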
Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.081015 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3ac85014-ac29-45f6-9461-a8c02c4fcca4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.081093 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3ac85014-ac29-45f6-9461-a8c02c4fcca4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.081174 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6zlk\" (UniqueName: \"kubernetes.io/projected/3ac85014-ac29-45f6-9461-a8c02c4fcca4-kube-api-access-f6zlk\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.081246 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3ac85014-ac29-45f6-9461-a8c02c4fcca4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.081275 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ac85014-ac29-45f6-9461-a8c02c4fcca4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.081331 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3ac85014-ac29-45f6-9461-a8c02c4fcca4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.081422 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3ac85014-ac29-45f6-9461-a8c02c4fcca4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.081463 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.081754 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3ac85014-ac29-45f6-9461-a8c02c4fcca4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.081873 4933 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.082945 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3ac85014-ac29-45f6-9461-a8c02c4fcca4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.083278 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3ac85014-ac29-45f6-9461-a8c02c4fcca4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.084078 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ac85014-ac29-45f6-9461-a8c02c4fcca4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.084154 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3ac85014-ac29-45f6-9461-a8c02c4fcca4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.088535 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3ac85014-ac29-45f6-9461-a8c02c4fcca4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.089800 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3ac85014-ac29-45f6-9461-a8c02c4fcca4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.089838 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3ac85014-ac29-45f6-9461-a8c02c4fcca4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.089835 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3ac85014-ac29-45f6-9461-a8c02c4fcca4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.105661 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6zlk\" (UniqueName: \"kubernetes.io/projected/3ac85014-ac29-45f6-9461-a8c02c4fcca4-kube-api-access-f6zlk\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.128038 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ac85014-ac29-45f6-9461-a8c02c4fcca4\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.207454 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.690856 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d9a36ba-b2c3-4f85-96d6-608d8e9749ec" path="/var/lib/kubelet/pods/3d9a36ba-b2c3-4f85-96d6-608d8e9749ec/volumes" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.692557 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8f90456-f375-447c-8f32-8ca629a28861" path="/var/lib/kubelet/pods/b8f90456-f375-447c-8f32-8ca629a28861/volumes" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.718351 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-gzsxv"] Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.720529 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.725221 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.764333 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-gzsxv"] Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.791122 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4b205db3-c812-4f4e-a81c-3662f2ca0cf1","Type":"ContainerStarted","Data":"998281b80167443cab000e4332b82438857e469db38b150b503a30f659e54461"} Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.795450 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.910766 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-gzsxv\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.911422 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-gzsxv\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.911484 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-config\") pod \"dnsmasq-dns-79bd4cc8c9-gzsxv\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.911521 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-gzsxv\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.911575 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-gzsxv\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.911650 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk8xc\" (UniqueName: \"kubernetes.io/projected/56c688b6-fe34-4071-8a4c-5427156b0e39-kube-api-access-bk8xc\") pod \"dnsmasq-dns-79bd4cc8c9-gzsxv\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" Dec 01 09:55:49 crc kubenswrapper[4933]: I1201 09:55:49.911735 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-gzsxv\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" Dec 01 09:55:50 crc kubenswrapper[4933]: I1201 09:55:50.015037 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-gzsxv\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" Dec 01 09:55:50 crc kubenswrapper[4933]: I1201 09:55:50.013986 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-gzsxv\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" Dec 01 09:55:50 crc kubenswrapper[4933]: I1201 09:55:50.015561 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-gzsxv\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" Dec 01 09:55:50 crc kubenswrapper[4933]: I1201 09:55:50.016422 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-config\") pod \"dnsmasq-dns-79bd4cc8c9-gzsxv\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" Dec 01 09:55:50 crc kubenswrapper[4933]: I1201 09:55:50.017389 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-gzsxv\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" Dec 01 09:55:50 crc kubenswrapper[4933]: I1201 09:55:50.016218 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-gzsxv\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" Dec 01 09:55:50 crc kubenswrapper[4933]: I1201 09:55:50.018029 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-gzsxv\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" Dec 01 09:55:50 crc kubenswrapper[4933]: I1201 09:55:50.017281 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-config\") pod \"dnsmasq-dns-79bd4cc8c9-gzsxv\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" Dec 01 09:55:50 crc kubenswrapper[4933]: I1201 09:55:50.018447 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-gzsxv\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" Dec 01 09:55:50 crc kubenswrapper[4933]: I1201 09:55:50.019044 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-gzsxv\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" Dec 01 09:55:50 crc kubenswrapper[4933]: I1201 09:55:50.019333 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk8xc\" (UniqueName: \"kubernetes.io/projected/56c688b6-fe34-4071-8a4c-5427156b0e39-kube-api-access-bk8xc\") pod \"dnsmasq-dns-79bd4cc8c9-gzsxv\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" Dec 01 09:55:50 crc kubenswrapper[4933]: I1201 09:55:50.019983 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-gzsxv\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" Dec 01 09:55:50 crc kubenswrapper[4933]: I1201 09:55:50.020716 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-gzsxv\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" Dec 01 09:55:50 crc kubenswrapper[4933]: I1201 09:55:50.075755 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk8xc\" (UniqueName: \"kubernetes.io/projected/56c688b6-fe34-4071-8a4c-5427156b0e39-kube-api-access-bk8xc\") pod \"dnsmasq-dns-79bd4cc8c9-gzsxv\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" Dec 01 09:55:50 crc kubenswrapper[4933]: I1201 09:55:50.357330 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv"
Dec 01 09:55:50 crc kubenswrapper[4933]: I1201 09:55:50.804285 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3ac85014-ac29-45f6-9461-a8c02c4fcca4","Type":"ContainerStarted","Data":"dfa04affae093478c2307f2db56a0d04f48c1cc8761c4e7c9fe74a0fb1b64c49"}
Dec 01 09:55:51 crc kubenswrapper[4933]: I1201 09:55:51.010888 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-gzsxv"]
Dec 01 09:55:51 crc kubenswrapper[4933]: W1201 09:55:51.013592 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56c688b6_fe34_4071_8a4c_5427156b0e39.slice/crio-06c1fe3aec407a18039a1fbf9270a742cb1d1fc5cd318f9f93d1f7444c2fa1cf WatchSource:0}: Error finding container 06c1fe3aec407a18039a1fbf9270a742cb1d1fc5cd318f9f93d1f7444c2fa1cf: Status 404 returned error can't find the container with id 06c1fe3aec407a18039a1fbf9270a742cb1d1fc5cd318f9f93d1f7444c2fa1cf
Dec 01 09:55:51 crc kubenswrapper[4933]: I1201 09:55:51.819187 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4b205db3-c812-4f4e-a81c-3662f2ca0cf1","Type":"ContainerStarted","Data":"5081e30181a75ab4674854d205e4734a8b8c738d57beddb23f72e79a5da02363"}
Dec 01 09:55:51 crc kubenswrapper[4933]: I1201 09:55:51.823557 4933 generic.go:334] "Generic (PLEG): container finished" podID="56c688b6-fe34-4071-8a4c-5427156b0e39" containerID="ab7fe5501910caf0c5c283917b32304fb79b79b725e969c8907da2a6bdd0fb0f" exitCode=0
Dec 01 09:55:51 crc kubenswrapper[4933]: I1201 09:55:51.823649 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" event={"ID":"56c688b6-fe34-4071-8a4c-5427156b0e39","Type":"ContainerDied","Data":"ab7fe5501910caf0c5c283917b32304fb79b79b725e969c8907da2a6bdd0fb0f"}
Dec 01 09:55:51 crc kubenswrapper[4933]: I1201 09:55:51.823755 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" event={"ID":"56c688b6-fe34-4071-8a4c-5427156b0e39","Type":"ContainerStarted","Data":"06c1fe3aec407a18039a1fbf9270a742cb1d1fc5cd318f9f93d1f7444c2fa1cf"}
Dec 01 09:55:51 crc kubenswrapper[4933]: I1201 09:55:51.926216 4933 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="3d9a36ba-b2c3-4f85-96d6-608d8e9749ec" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: i/o timeout"
Dec 01 09:55:52 crc kubenswrapper[4933]: I1201 09:55:52.837043 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3ac85014-ac29-45f6-9461-a8c02c4fcca4","Type":"ContainerStarted","Data":"e2b8dbb54720312ef8b342defaa9f61bec215d4510e00c3a55450f0abc9d4504"}
Dec 01 09:55:52 crc kubenswrapper[4933]: I1201 09:55:52.842451 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" event={"ID":"56c688b6-fe34-4071-8a4c-5427156b0e39","Type":"ContainerStarted","Data":"42418697908f471be31dca919a1b30b79a921afa8f968f6de2ed1c2d5db5bb10"}
Dec 01 09:55:52 crc kubenswrapper[4933]: I1201 09:55:52.842591 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv"
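The "Probe failed" entry a few lines up is a readiness check timing out against the pod IP on 5671 (AMQP over TLS), and status="" on the SyncLoop (probe) line directly above means the new dnsmasq pod has not reported ready yet. A readiness probe that fails in exactly this "dial tcp ...:5671: i/o timeout" shape can be sketched with the core/v1 types as below; the thresholds are assumptions, and the RabbitMQ operator may well define an exec probe instead:

    // readinessprobe.go — hedged sketch of a TCP readiness probe.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        // When the TCP dial does not complete within TimeoutSeconds, the kubelet
        // logs: probeResult="failure" output="dial tcp <podIP>:5671: i/o timeout".
        probe := corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(5671)},
            },
            InitialDelaySeconds: 10, // assumed values, not taken from the log
            PeriodSeconds:       10,
            TimeoutSeconds:      5,
        }
        fmt.Printf("%+v\n", probe)
    }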
Dec 01 09:55:52 crc kubenswrapper[4933]: I1201 09:55:52.901629 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" podStartSLOduration=3.901612055 podStartE2EDuration="3.901612055s" podCreationTimestamp="2025-12-01 09:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:55:52.893473955 +0000 UTC m=+1443.535197580" watchObservedRunningTime="2025-12-01 09:55:52.901612055 +0000 UTC m=+1443.543335670"
Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.359510 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv"
Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.427731 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-vt8jx"]
Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.428400 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" podUID="d2c9ef2a-a283-4e71-8d32-03bcc513b6e8" containerName="dnsmasq-dns" containerID="cri-o://7dda46422342fa46621cbb66b446a819140f58c7205d495eddf9114f4bad4578" gracePeriod=10
Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.572688 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55478c4467-c5hhz"]
Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.578749 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-c5hhz"
Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.583698 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eab25613-97d2-4420-875f-c5b71e62357f-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-c5hhz\" (UID: \"eab25613-97d2-4420-875f-c5b71e62357f\") " pod="openstack/dnsmasq-dns-55478c4467-c5hhz"
Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.583760 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/eab25613-97d2-4420-875f-c5b71e62357f-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-c5hhz\" (UID: \"eab25613-97d2-4420-875f-c5b71e62357f\") " pod="openstack/dnsmasq-dns-55478c4467-c5hhz"
Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.583780 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eab25613-97d2-4420-875f-c5b71e62357f-dns-svc\") pod \"dnsmasq-dns-55478c4467-c5hhz\" (UID: \"eab25613-97d2-4420-875f-c5b71e62357f\") " pod="openstack/dnsmasq-dns-55478c4467-c5hhz"
Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.583810 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eab25613-97d2-4420-875f-c5b71e62357f-config\") pod \"dnsmasq-dns-55478c4467-c5hhz\" (UID: \"eab25613-97d2-4420-875f-c5b71e62357f\") " pod="openstack/dnsmasq-dns-55478c4467-c5hhz"
Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.583881 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkqgn\" (UniqueName: \"kubernetes.io/projected/eab25613-97d2-4420-875f-c5b71e62357f-kube-api-access-zkqgn\") pod \"dnsmasq-dns-55478c4467-c5hhz\" (UID: \"eab25613-97d2-4420-875f-c5b71e62357f\") " pod="openstack/dnsmasq-dns-55478c4467-c5hhz"
Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.583918 4933
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eab25613-97d2-4420-875f-c5b71e62357f-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-c5hhz\" (UID: \"eab25613-97d2-4420-875f-c5b71e62357f\") " pod="openstack/dnsmasq-dns-55478c4467-c5hhz" Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.583938 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eab25613-97d2-4420-875f-c5b71e62357f-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-c5hhz\" (UID: \"eab25613-97d2-4420-875f-c5b71e62357f\") " pod="openstack/dnsmasq-dns-55478c4467-c5hhz" Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.613705 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-c5hhz"] Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.685179 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eab25613-97d2-4420-875f-c5b71e62357f-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-c5hhz\" (UID: \"eab25613-97d2-4420-875f-c5b71e62357f\") " pod="openstack/dnsmasq-dns-55478c4467-c5hhz" Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.685288 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eab25613-97d2-4420-875f-c5b71e62357f-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-c5hhz\" (UID: \"eab25613-97d2-4420-875f-c5b71e62357f\") " pod="openstack/dnsmasq-dns-55478c4467-c5hhz" Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.685385 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/eab25613-97d2-4420-875f-c5b71e62357f-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-c5hhz\" (UID: \"eab25613-97d2-4420-875f-c5b71e62357f\") " pod="openstack/dnsmasq-dns-55478c4467-c5hhz" Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.685417 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eab25613-97d2-4420-875f-c5b71e62357f-dns-svc\") pod \"dnsmasq-dns-55478c4467-c5hhz\" (UID: \"eab25613-97d2-4420-875f-c5b71e62357f\") " pod="openstack/dnsmasq-dns-55478c4467-c5hhz" Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.685458 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eab25613-97d2-4420-875f-c5b71e62357f-config\") pod \"dnsmasq-dns-55478c4467-c5hhz\" (UID: \"eab25613-97d2-4420-875f-c5b71e62357f\") " pod="openstack/dnsmasq-dns-55478c4467-c5hhz" Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.685568 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkqgn\" (UniqueName: \"kubernetes.io/projected/eab25613-97d2-4420-875f-c5b71e62357f-kube-api-access-zkqgn\") pod \"dnsmasq-dns-55478c4467-c5hhz\" (UID: \"eab25613-97d2-4420-875f-c5b71e62357f\") " pod="openstack/dnsmasq-dns-55478c4467-c5hhz" Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.685626 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eab25613-97d2-4420-875f-c5b71e62357f-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-c5hhz\" (UID: 
\"eab25613-97d2-4420-875f-c5b71e62357f\") " pod="openstack/dnsmasq-dns-55478c4467-c5hhz" Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.686350 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eab25613-97d2-4420-875f-c5b71e62357f-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-c5hhz\" (UID: \"eab25613-97d2-4420-875f-c5b71e62357f\") " pod="openstack/dnsmasq-dns-55478c4467-c5hhz" Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.686892 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/eab25613-97d2-4420-875f-c5b71e62357f-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-c5hhz\" (UID: \"eab25613-97d2-4420-875f-c5b71e62357f\") " pod="openstack/dnsmasq-dns-55478c4467-c5hhz" Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.687153 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eab25613-97d2-4420-875f-c5b71e62357f-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-c5hhz\" (UID: \"eab25613-97d2-4420-875f-c5b71e62357f\") " pod="openstack/dnsmasq-dns-55478c4467-c5hhz" Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.687326 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eab25613-97d2-4420-875f-c5b71e62357f-config\") pod \"dnsmasq-dns-55478c4467-c5hhz\" (UID: \"eab25613-97d2-4420-875f-c5b71e62357f\") " pod="openstack/dnsmasq-dns-55478c4467-c5hhz" Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.687374 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eab25613-97d2-4420-875f-c5b71e62357f-dns-svc\") pod \"dnsmasq-dns-55478c4467-c5hhz\" (UID: \"eab25613-97d2-4420-875f-c5b71e62357f\") " pod="openstack/dnsmasq-dns-55478c4467-c5hhz" Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.687443 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eab25613-97d2-4420-875f-c5b71e62357f-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-c5hhz\" (UID: \"eab25613-97d2-4420-875f-c5b71e62357f\") " pod="openstack/dnsmasq-dns-55478c4467-c5hhz" Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.716967 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkqgn\" (UniqueName: \"kubernetes.io/projected/eab25613-97d2-4420-875f-c5b71e62357f-kube-api-access-zkqgn\") pod \"dnsmasq-dns-55478c4467-c5hhz\" (UID: \"eab25613-97d2-4420-875f-c5b71e62357f\") " pod="openstack/dnsmasq-dns-55478c4467-c5hhz" Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.932212 4933 generic.go:334] "Generic (PLEG): container finished" podID="d2c9ef2a-a283-4e71-8d32-03bcc513b6e8" containerID="7dda46422342fa46621cbb66b446a819140f58c7205d495eddf9114f4bad4578" exitCode=0 Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.932322 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" event={"ID":"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8","Type":"ContainerDied","Data":"7dda46422342fa46621cbb66b446a819140f58c7205d495eddf9114f4bad4578"} Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.932666 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" 
event={"ID":"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8","Type":"ContainerDied","Data":"3d315677482ce0ce2c9d3803ca037d68867533e9fac4f2876206e1bb6e07d7d4"} Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.932689 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d315677482ce0ce2c9d3803ca037d68867533e9fac4f2876206e1bb6e07d7d4" Dec 01 09:56:00 crc kubenswrapper[4933]: I1201 09:56:00.936805 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-c5hhz" Dec 01 09:56:01 crc kubenswrapper[4933]: I1201 09:56:01.051191 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx" Dec 01 09:56:01 crc kubenswrapper[4933]: I1201 09:56:01.200376 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-dns-svc\") pod \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\" (UID: \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\") " Dec 01 09:56:01 crc kubenswrapper[4933]: I1201 09:56:01.200501 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-ovsdbserver-nb\") pod \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\" (UID: \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\") " Dec 01 09:56:01 crc kubenswrapper[4933]: I1201 09:56:01.200672 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w46l\" (UniqueName: \"kubernetes.io/projected/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-kube-api-access-4w46l\") pod \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\" (UID: \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\") " Dec 01 09:56:01 crc kubenswrapper[4933]: I1201 09:56:01.200703 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-config\") pod \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\" (UID: \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\") " Dec 01 09:56:01 crc kubenswrapper[4933]: I1201 09:56:01.200842 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-ovsdbserver-sb\") pod \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\" (UID: \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\") " Dec 01 09:56:01 crc kubenswrapper[4933]: I1201 09:56:01.200925 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-dns-swift-storage-0\") pod \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\" (UID: \"d2c9ef2a-a283-4e71-8d32-03bcc513b6e8\") " Dec 01 09:56:01 crc kubenswrapper[4933]: I1201 09:56:01.208465 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-kube-api-access-4w46l" (OuterVolumeSpecName: "kube-api-access-4w46l") pod "d2c9ef2a-a283-4e71-8d32-03bcc513b6e8" (UID: "d2c9ef2a-a283-4e71-8d32-03bcc513b6e8"). InnerVolumeSpecName "kube-api-access-4w46l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:56:01 crc kubenswrapper[4933]: I1201 09:56:01.263988 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d2c9ef2a-a283-4e71-8d32-03bcc513b6e8" (UID: "d2c9ef2a-a283-4e71-8d32-03bcc513b6e8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:56:01 crc kubenswrapper[4933]: I1201 09:56:01.264020 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d2c9ef2a-a283-4e71-8d32-03bcc513b6e8" (UID: "d2c9ef2a-a283-4e71-8d32-03bcc513b6e8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:56:01 crc kubenswrapper[4933]: I1201 09:56:01.272743 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-config" (OuterVolumeSpecName: "config") pod "d2c9ef2a-a283-4e71-8d32-03bcc513b6e8" (UID: "d2c9ef2a-a283-4e71-8d32-03bcc513b6e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:56:01 crc kubenswrapper[4933]: I1201 09:56:01.278722 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d2c9ef2a-a283-4e71-8d32-03bcc513b6e8" (UID: "d2c9ef2a-a283-4e71-8d32-03bcc513b6e8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:56:01 crc kubenswrapper[4933]: I1201 09:56:01.281775 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d2c9ef2a-a283-4e71-8d32-03bcc513b6e8" (UID: "d2c9ef2a-a283-4e71-8d32-03bcc513b6e8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:56:01 crc kubenswrapper[4933]: I1201 09:56:01.303850 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4w46l\" (UniqueName: \"kubernetes.io/projected/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-kube-api-access-4w46l\") on node \"crc\" DevicePath \"\"" Dec 01 09:56:01 crc kubenswrapper[4933]: I1201 09:56:01.303931 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:56:01 crc kubenswrapper[4933]: I1201 09:56:01.303947 4933 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 09:56:01 crc kubenswrapper[4933]: I1201 09:56:01.303969 4933 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 09:56:01 crc kubenswrapper[4933]: I1201 09:56:01.303986 4933 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 09:56:01 crc kubenswrapper[4933]: I1201 09:56:01.304000 4933 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 09:56:01 crc kubenswrapper[4933]: I1201 09:56:01.448033 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-c5hhz"] Dec 01 09:56:01 crc kubenswrapper[4933]: I1201 09:56:01.945103 4933 generic.go:334] "Generic (PLEG): container finished" podID="eab25613-97d2-4420-875f-c5b71e62357f" containerID="f5870640855a4f5362c9039ad91624fa389e7538963a73ff41871572a0666715" exitCode=0 Dec 01 09:56:01 crc kubenswrapper[4933]: I1201 09:56:01.945298 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-c5hhz" event={"ID":"eab25613-97d2-4420-875f-c5b71e62357f","Type":"ContainerDied","Data":"f5870640855a4f5362c9039ad91624fa389e7538963a73ff41871572a0666715"} Dec 01 09:56:01 crc kubenswrapper[4933]: I1201 09:56:01.945592 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-vt8jx"
Dec 01 09:56:01 crc kubenswrapper[4933]: I1201 09:56:01.945637 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-c5hhz" event={"ID":"eab25613-97d2-4420-875f-c5b71e62357f","Type":"ContainerStarted","Data":"ac36b8c7bcadbd2e2524fb1d34df6d2c767fefab0cfc26738ece9c8b522ff8e8"}
Dec 01 09:56:01 crc kubenswrapper[4933]: I1201 09:56:01.995460 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-vt8jx"]
Dec 01 09:56:02 crc kubenswrapper[4933]: I1201 09:56:02.003812 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-vt8jx"]
Dec 01 09:56:02 crc kubenswrapper[4933]: I1201 09:56:02.981125 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-c5hhz" event={"ID":"eab25613-97d2-4420-875f-c5b71e62357f","Type":"ContainerStarted","Data":"4298a90aa7fd9e5cab5fd373c95e1b4e69d02f290f6f3058d78a89a8028cf1e1"}
Dec 01 09:56:02 crc kubenswrapper[4933]: I1201 09:56:02.982130 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-c5hhz"
Dec 01 09:56:03 crc kubenswrapper[4933]: I1201 09:56:03.012842 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-c5hhz" podStartSLOduration=3.012811709 podStartE2EDuration="3.012811709s" podCreationTimestamp="2025-12-01 09:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:56:03.007597692 +0000 UTC m=+1453.649321317" watchObservedRunningTime="2025-12-01 09:56:03.012811709 +0000 UTC m=+1453.654535324"
Dec 01 09:56:03 crc kubenswrapper[4933]: I1201 09:56:03.681298 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2c9ef2a-a283-4e71-8d32-03bcc513b6e8" path="/var/lib/kubelet/pods/d2c9ef2a-a283-4e71-8d32-03bcc513b6e8/volumes"
Dec 01 09:56:10 crc kubenswrapper[4933]: I1201 09:56:10.938285 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-c5hhz"
Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.018006 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-gzsxv"]
Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.018455 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" podUID="56c688b6-fe34-4071-8a4c-5427156b0e39" containerName="dnsmasq-dns" containerID="cri-o://42418697908f471be31dca919a1b30b79a921afa8f968f6de2ed1c2d5db5bb10" gracePeriod=10
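Directly above, an API-side DELETE is followed by "Killing container with a grace period" with gracePeriod=10: the kubelet SIGTERMs the dnsmasq container and escalates to SIGKILL only if the period lapses, after which the PLEG ContainerDied events and the volume teardown below follow. The same deletion, issued programmatically, is a one-liner in client-go (a sketch reusing the kubeconfig setup from the earlier listing):

    // gracefuldelete.go — hedged sketch; pod name taken from the entry above.
    package main

    import (
        "context"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // A 10s grace period reproduces gracePeriod=10 in the kubelet log:
        // SIGTERM first, SIGKILL when the period expires.
        grace := int64(10)
        if err := cs.CoreV1().Pods("openstack").Delete(
            context.TODO(),
            "dnsmasq-dns-79bd4cc8c9-gzsxv",
            metav1.DeleteOptions{GracePeriodSeconds: &grace},
        ); err != nil {
            panic(err)
        }
    }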
Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.616390 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv"
Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.676630 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-openstack-edpm-ipam\") pod \"56c688b6-fe34-4071-8a4c-5427156b0e39\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") "
Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.676715 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-ovsdbserver-nb\") pod \"56c688b6-fe34-4071-8a4c-5427156b0e39\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") "
Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.677013 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk8xc\" (UniqueName: \"kubernetes.io/projected/56c688b6-fe34-4071-8a4c-5427156b0e39-kube-api-access-bk8xc\") pod \"56c688b6-fe34-4071-8a4c-5427156b0e39\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") "
Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.677116 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-dns-svc\") pod \"56c688b6-fe34-4071-8a4c-5427156b0e39\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") "
Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.677183 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-dns-swift-storage-0\") pod \"56c688b6-fe34-4071-8a4c-5427156b0e39\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") "
Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.677468 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-ovsdbserver-sb\") pod \"56c688b6-fe34-4071-8a4c-5427156b0e39\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") "
Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.677531 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-config\") pod \"56c688b6-fe34-4071-8a4c-5427156b0e39\" (UID: \"56c688b6-fe34-4071-8a4c-5427156b0e39\") "
Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.686550 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56c688b6-fe34-4071-8a4c-5427156b0e39-kube-api-access-bk8xc" (OuterVolumeSpecName: "kube-api-access-bk8xc") pod "56c688b6-fe34-4071-8a4c-5427156b0e39" (UID: "56c688b6-fe34-4071-8a4c-5427156b0e39"). InnerVolumeSpecName "kube-api-access-bk8xc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.739922 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "56c688b6-fe34-4071-8a4c-5427156b0e39" (UID: "56c688b6-fe34-4071-8a4c-5427156b0e39"). InnerVolumeSpecName "openstack-edpm-ipam".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.742826 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.742930 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.744370 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "56c688b6-fe34-4071-8a4c-5427156b0e39" (UID: "56c688b6-fe34-4071-8a4c-5427156b0e39"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.745795 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "56c688b6-fe34-4071-8a4c-5427156b0e39" (UID: "56c688b6-fe34-4071-8a4c-5427156b0e39"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.754181 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56c688b6-fe34-4071-8a4c-5427156b0e39" (UID: "56c688b6-fe34-4071-8a4c-5427156b0e39"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.759576 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-config" (OuterVolumeSpecName: "config") pod "56c688b6-fe34-4071-8a4c-5427156b0e39" (UID: "56c688b6-fe34-4071-8a4c-5427156b0e39"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.785749 4933 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.785802 4933 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.785815 4933 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.785826 4933 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.785840 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk8xc\" (UniqueName: \"kubernetes.io/projected/56c688b6-fe34-4071-8a4c-5427156b0e39-kube-api-access-bk8xc\") on node \"crc\" DevicePath \"\"" Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.785854 4933 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.787360 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "56c688b6-fe34-4071-8a4c-5427156b0e39" (UID: "56c688b6-fe34-4071-8a4c-5427156b0e39"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.807049 4933 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.808673 4933 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8505b08ab0c6a32f9d9b3cdadd9a40ce10f6aaa716925824a170840b097c0cb7"} pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.808768 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" containerID="cri-o://8505b08ab0c6a32f9d9b3cdadd9a40ce10f6aaa716925824a170840b097c0cb7" gracePeriod=600 Dec 01 09:56:11 crc kubenswrapper[4933]: I1201 09:56:11.889493 4933 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56c688b6-fe34-4071-8a4c-5427156b0e39-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 09:56:12 crc kubenswrapper[4933]: I1201 09:56:12.099304 4933 generic.go:334] "Generic (PLEG): container finished" podID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerID="8505b08ab0c6a32f9d9b3cdadd9a40ce10f6aaa716925824a170840b097c0cb7" exitCode=0 Dec 01 09:56:12 crc kubenswrapper[4933]: I1201 09:56:12.099429 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerDied","Data":"8505b08ab0c6a32f9d9b3cdadd9a40ce10f6aaa716925824a170840b097c0cb7"} Dec 01 09:56:12 crc kubenswrapper[4933]: I1201 09:56:12.099579 4933 scope.go:117] "RemoveContainer" containerID="9a07b0704942b02814f9e2cbae890ab2665cd7af25f4178c9003a8e4c8ac846a" Dec 01 09:56:12 crc kubenswrapper[4933]: I1201 09:56:12.104932 4933 generic.go:334] "Generic (PLEG): container finished" podID="56c688b6-fe34-4071-8a4c-5427156b0e39" containerID="42418697908f471be31dca919a1b30b79a921afa8f968f6de2ed1c2d5db5bb10" exitCode=0 Dec 01 09:56:12 crc kubenswrapper[4933]: I1201 09:56:12.104978 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" event={"ID":"56c688b6-fe34-4071-8a4c-5427156b0e39","Type":"ContainerDied","Data":"42418697908f471be31dca919a1b30b79a921afa8f968f6de2ed1c2d5db5bb10"} Dec 01 09:56:12 crc kubenswrapper[4933]: I1201 09:56:12.105023 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" event={"ID":"56c688b6-fe34-4071-8a4c-5427156b0e39","Type":"ContainerDied","Data":"06c1fe3aec407a18039a1fbf9270a742cb1d1fc5cd318f9f93d1f7444c2fa1cf"} Dec 01 09:56:12 crc kubenswrapper[4933]: I1201 09:56:12.105078 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-gzsxv" Dec 01 09:56:12 crc kubenswrapper[4933]: I1201 09:56:12.143104 4933 scope.go:117] "RemoveContainer" containerID="42418697908f471be31dca919a1b30b79a921afa8f968f6de2ed1c2d5db5bb10" Dec 01 09:56:12 crc kubenswrapper[4933]: I1201 09:56:12.152539 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-gzsxv"] Dec 01 09:56:12 crc kubenswrapper[4933]: I1201 09:56:12.164636 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-gzsxv"] Dec 01 09:56:12 crc kubenswrapper[4933]: I1201 09:56:12.227766 4933 scope.go:117] "RemoveContainer" containerID="ab7fe5501910caf0c5c283917b32304fb79b79b725e969c8907da2a6bdd0fb0f" Dec 01 09:56:12 crc kubenswrapper[4933]: I1201 09:56:12.252259 4933 scope.go:117] "RemoveContainer" containerID="42418697908f471be31dca919a1b30b79a921afa8f968f6de2ed1c2d5db5bb10" Dec 01 09:56:12 crc kubenswrapper[4933]: E1201 09:56:12.253038 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42418697908f471be31dca919a1b30b79a921afa8f968f6de2ed1c2d5db5bb10\": container with ID starting with 42418697908f471be31dca919a1b30b79a921afa8f968f6de2ed1c2d5db5bb10 not found: ID does not exist" containerID="42418697908f471be31dca919a1b30b79a921afa8f968f6de2ed1c2d5db5bb10" Dec 01 09:56:12 crc kubenswrapper[4933]: I1201 09:56:12.253115 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42418697908f471be31dca919a1b30b79a921afa8f968f6de2ed1c2d5db5bb10"} err="failed to get container status \"42418697908f471be31dca919a1b30b79a921afa8f968f6de2ed1c2d5db5bb10\": rpc error: code = NotFound desc = could not find container \"42418697908f471be31dca919a1b30b79a921afa8f968f6de2ed1c2d5db5bb10\": container with ID starting with 42418697908f471be31dca919a1b30b79a921afa8f968f6de2ed1c2d5db5bb10 not found: ID does not exist" Dec 01 09:56:12 crc kubenswrapper[4933]: I1201 09:56:12.253159 4933 scope.go:117] "RemoveContainer" containerID="ab7fe5501910caf0c5c283917b32304fb79b79b725e969c8907da2a6bdd0fb0f" Dec 01 09:56:12 crc kubenswrapper[4933]: E1201 09:56:12.253803 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab7fe5501910caf0c5c283917b32304fb79b79b725e969c8907da2a6bdd0fb0f\": container with ID starting with ab7fe5501910caf0c5c283917b32304fb79b79b725e969c8907da2a6bdd0fb0f not found: ID does not exist" containerID="ab7fe5501910caf0c5c283917b32304fb79b79b725e969c8907da2a6bdd0fb0f" Dec 01 09:56:12 crc kubenswrapper[4933]: I1201 09:56:12.253881 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab7fe5501910caf0c5c283917b32304fb79b79b725e969c8907da2a6bdd0fb0f"} err="failed to get container status \"ab7fe5501910caf0c5c283917b32304fb79b79b725e969c8907da2a6bdd0fb0f\": rpc error: code = NotFound desc = could not find container \"ab7fe5501910caf0c5c283917b32304fb79b79b725e969c8907da2a6bdd0fb0f\": container with ID starting with ab7fe5501910caf0c5c283917b32304fb79b79b725e969c8907da2a6bdd0fb0f not found: ID does not exist" Dec 01 09:56:13 crc kubenswrapper[4933]: I1201 09:56:13.126777 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" 
event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerStarted","Data":"c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417"} Dec 01 09:56:13 crc kubenswrapper[4933]: I1201 09:56:13.681708 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56c688b6-fe34-4071-8a4c-5427156b0e39" path="/var/lib/kubelet/pods/56c688b6-fe34-4071-8a4c-5427156b0e39/volumes" Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.247258 4933 generic.go:334] "Generic (PLEG): container finished" podID="4b205db3-c812-4f4e-a81c-3662f2ca0cf1" containerID="5081e30181a75ab4674854d205e4734a8b8c738d57beddb23f72e79a5da02363" exitCode=0 Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.247377 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4b205db3-c812-4f4e-a81c-3662f2ca0cf1","Type":"ContainerDied","Data":"5081e30181a75ab4674854d205e4734a8b8c738d57beddb23f72e79a5da02363"} Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.374293 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grj94"] Dec 01 09:56:24 crc kubenswrapper[4933]: E1201 09:56:24.374907 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c9ef2a-a283-4e71-8d32-03bcc513b6e8" containerName="dnsmasq-dns" Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.374936 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c9ef2a-a283-4e71-8d32-03bcc513b6e8" containerName="dnsmasq-dns" Dec 01 09:56:24 crc kubenswrapper[4933]: E1201 09:56:24.374958 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c9ef2a-a283-4e71-8d32-03bcc513b6e8" containerName="init" Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.374967 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c9ef2a-a283-4e71-8d32-03bcc513b6e8" containerName="init" Dec 01 09:56:24 crc kubenswrapper[4933]: E1201 09:56:24.374994 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c688b6-fe34-4071-8a4c-5427156b0e39" containerName="dnsmasq-dns" Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.375003 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c688b6-fe34-4071-8a4c-5427156b0e39" containerName="dnsmasq-dns" Dec 01 09:56:24 crc kubenswrapper[4933]: E1201 09:56:24.375036 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c688b6-fe34-4071-8a4c-5427156b0e39" containerName="init" Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.375042 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c688b6-fe34-4071-8a4c-5427156b0e39" containerName="init" Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.375280 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2c9ef2a-a283-4e71-8d32-03bcc513b6e8" containerName="dnsmasq-dns" Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.375340 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c688b6-fe34-4071-8a4c-5427156b0e39" containerName="dnsmasq-dns" Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.376278 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grj94" Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.379195 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.382095 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.382559 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.385546 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8vmpq" Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.397428 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grj94"] Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.400495 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv597\" (UniqueName: \"kubernetes.io/projected/af261b96-cdfe-4987-8689-bec0506287d2-kube-api-access-pv597\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-grj94\" (UID: \"af261b96-cdfe-4987-8689-bec0506287d2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grj94" Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.400695 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/af261b96-cdfe-4987-8689-bec0506287d2-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-grj94\" (UID: \"af261b96-cdfe-4987-8689-bec0506287d2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grj94" Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.401297 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af261b96-cdfe-4987-8689-bec0506287d2-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-grj94\" (UID: \"af261b96-cdfe-4987-8689-bec0506287d2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grj94" Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.401614 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af261b96-cdfe-4987-8689-bec0506287d2-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-grj94\" (UID: \"af261b96-cdfe-4987-8689-bec0506287d2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grj94" Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.504079 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv597\" (UniqueName: \"kubernetes.io/projected/af261b96-cdfe-4987-8689-bec0506287d2-kube-api-access-pv597\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-grj94\" (UID: \"af261b96-cdfe-4987-8689-bec0506287d2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grj94" Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.504555 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/af261b96-cdfe-4987-8689-bec0506287d2-ssh-key\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-grj94\" (UID: \"af261b96-cdfe-4987-8689-bec0506287d2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grj94" Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.504602 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af261b96-cdfe-4987-8689-bec0506287d2-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-grj94\" (UID: \"af261b96-cdfe-4987-8689-bec0506287d2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grj94" Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.504676 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af261b96-cdfe-4987-8689-bec0506287d2-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-grj94\" (UID: \"af261b96-cdfe-4987-8689-bec0506287d2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grj94" Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.510352 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af261b96-cdfe-4987-8689-bec0506287d2-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-grj94\" (UID: \"af261b96-cdfe-4987-8689-bec0506287d2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grj94" Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.513607 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af261b96-cdfe-4987-8689-bec0506287d2-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-grj94\" (UID: \"af261b96-cdfe-4987-8689-bec0506287d2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grj94" Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.514651 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/af261b96-cdfe-4987-8689-bec0506287d2-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-grj94\" (UID: \"af261b96-cdfe-4987-8689-bec0506287d2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grj94" Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.532630 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv597\" (UniqueName: \"kubernetes.io/projected/af261b96-cdfe-4987-8689-bec0506287d2-kube-api-access-pv597\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-grj94\" (UID: \"af261b96-cdfe-4987-8689-bec0506287d2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grj94" Dec 01 09:56:24 crc kubenswrapper[4933]: I1201 09:56:24.613228 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grj94" Dec 01 09:56:25 crc kubenswrapper[4933]: I1201 09:56:25.253015 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grj94"] Dec 01 09:56:25 crc kubenswrapper[4933]: W1201 09:56:25.262214 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf261b96_cdfe_4987_8689_bec0506287d2.slice/crio-d857e6eadad8562990043472cc8ecd12b66e8f6394905669f54c52541daa1148 WatchSource:0}: Error finding container d857e6eadad8562990043472cc8ecd12b66e8f6394905669f54c52541daa1148: Status 404 returned error can't find the container with id d857e6eadad8562990043472cc8ecd12b66e8f6394905669f54c52541daa1148 Dec 01 09:56:25 crc kubenswrapper[4933]: I1201 09:56:25.262701 4933 generic.go:334] "Generic (PLEG): container finished" podID="3ac85014-ac29-45f6-9461-a8c02c4fcca4" containerID="e2b8dbb54720312ef8b342defaa9f61bec215d4510e00c3a55450f0abc9d4504" exitCode=0 Dec 01 09:56:25 crc kubenswrapper[4933]: I1201 09:56:25.262755 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3ac85014-ac29-45f6-9461-a8c02c4fcca4","Type":"ContainerDied","Data":"e2b8dbb54720312ef8b342defaa9f61bec215d4510e00c3a55450f0abc9d4504"} Dec 01 09:56:25 crc kubenswrapper[4933]: I1201 09:56:25.271677 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4b205db3-c812-4f4e-a81c-3662f2ca0cf1","Type":"ContainerStarted","Data":"53f7b94c251dfff59486848f46f15e3dc66ac767cb3b036e15b8d4e748876773"} Dec 01 09:56:25 crc kubenswrapper[4933]: I1201 09:56:25.272667 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 01 09:56:25 crc kubenswrapper[4933]: I1201 09:56:25.334454 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.334427036 podStartE2EDuration="38.334427036s" podCreationTimestamp="2025-12-01 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:56:25.332506228 +0000 UTC m=+1475.974229863" watchObservedRunningTime="2025-12-01 09:56:25.334427036 +0000 UTC m=+1475.976150651" Dec 01 09:56:26 crc kubenswrapper[4933]: I1201 09:56:26.284811 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grj94" event={"ID":"af261b96-cdfe-4987-8689-bec0506287d2","Type":"ContainerStarted","Data":"d857e6eadad8562990043472cc8ecd12b66e8f6394905669f54c52541daa1148"} Dec 01 09:56:26 crc kubenswrapper[4933]: I1201 09:56:26.289374 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3ac85014-ac29-45f6-9461-a8c02c4fcca4","Type":"ContainerStarted","Data":"e6d7c478f8e9872415f5b7c7294541b362c388825486ad90abf308da4590b78a"} Dec 01 09:56:26 crc kubenswrapper[4933]: I1201 09:56:26.323739 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.323704673 podStartE2EDuration="38.323704673s" podCreationTimestamp="2025-12-01 09:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:56:26.317370038 +0000 UTC m=+1476.959093653" 
watchObservedRunningTime="2025-12-01 09:56:26.323704673 +0000 UTC m=+1476.965428278" Dec 01 09:56:29 crc kubenswrapper[4933]: I1201 09:56:29.208218 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:56:36 crc kubenswrapper[4933]: I1201 09:56:36.408907 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grj94" event={"ID":"af261b96-cdfe-4987-8689-bec0506287d2","Type":"ContainerStarted","Data":"09ab35caa2723ebac575fbc90d7422cdda5cfae7ba5effba4145f5278f5ee5d6"} Dec 01 09:56:36 crc kubenswrapper[4933]: I1201 09:56:36.429446 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grj94" podStartSLOduration=1.5633522530000001 podStartE2EDuration="12.429420594s" podCreationTimestamp="2025-12-01 09:56:24 +0000 UTC" firstStartedPulling="2025-12-01 09:56:25.267814447 +0000 UTC m=+1475.909538062" lastFinishedPulling="2025-12-01 09:56:36.133882788 +0000 UTC m=+1486.775606403" observedRunningTime="2025-12-01 09:56:36.429212639 +0000 UTC m=+1487.070936244" watchObservedRunningTime="2025-12-01 09:56:36.429420594 +0000 UTC m=+1487.071144209" Dec 01 09:56:38 crc kubenswrapper[4933]: I1201 09:56:38.529537 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 01 09:56:39 crc kubenswrapper[4933]: I1201 09:56:39.211689 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:56:48 crc kubenswrapper[4933]: I1201 09:56:48.549104 4933 generic.go:334] "Generic (PLEG): container finished" podID="af261b96-cdfe-4987-8689-bec0506287d2" containerID="09ab35caa2723ebac575fbc90d7422cdda5cfae7ba5effba4145f5278f5ee5d6" exitCode=0 Dec 01 09:56:48 crc kubenswrapper[4933]: I1201 09:56:48.549198 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grj94" event={"ID":"af261b96-cdfe-4987-8689-bec0506287d2","Type":"ContainerDied","Data":"09ab35caa2723ebac575fbc90d7422cdda5cfae7ba5effba4145f5278f5ee5d6"} Dec 01 09:56:50 crc kubenswrapper[4933]: I1201 09:56:50.822656 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grj94" Dec 01 09:56:50 crc kubenswrapper[4933]: I1201 09:56:50.925038 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/af261b96-cdfe-4987-8689-bec0506287d2-ssh-key\") pod \"af261b96-cdfe-4987-8689-bec0506287d2\" (UID: \"af261b96-cdfe-4987-8689-bec0506287d2\") " Dec 01 09:56:50 crc kubenswrapper[4933]: I1201 09:56:50.925138 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af261b96-cdfe-4987-8689-bec0506287d2-repo-setup-combined-ca-bundle\") pod \"af261b96-cdfe-4987-8689-bec0506287d2\" (UID: \"af261b96-cdfe-4987-8689-bec0506287d2\") " Dec 01 09:56:50 crc kubenswrapper[4933]: I1201 09:56:50.925210 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv597\" (UniqueName: \"kubernetes.io/projected/af261b96-cdfe-4987-8689-bec0506287d2-kube-api-access-pv597\") pod \"af261b96-cdfe-4987-8689-bec0506287d2\" (UID: \"af261b96-cdfe-4987-8689-bec0506287d2\") " Dec 01 09:56:50 crc kubenswrapper[4933]: I1201 09:56:50.925381 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af261b96-cdfe-4987-8689-bec0506287d2-inventory\") pod \"af261b96-cdfe-4987-8689-bec0506287d2\" (UID: \"af261b96-cdfe-4987-8689-bec0506287d2\") " Dec 01 09:56:50 crc kubenswrapper[4933]: I1201 09:56:50.934447 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af261b96-cdfe-4987-8689-bec0506287d2-kube-api-access-pv597" (OuterVolumeSpecName: "kube-api-access-pv597") pod "af261b96-cdfe-4987-8689-bec0506287d2" (UID: "af261b96-cdfe-4987-8689-bec0506287d2"). InnerVolumeSpecName "kube-api-access-pv597". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:56:50 crc kubenswrapper[4933]: I1201 09:56:50.934886 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af261b96-cdfe-4987-8689-bec0506287d2-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "af261b96-cdfe-4987-8689-bec0506287d2" (UID: "af261b96-cdfe-4987-8689-bec0506287d2"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:56:50 crc kubenswrapper[4933]: I1201 09:56:50.966320 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af261b96-cdfe-4987-8689-bec0506287d2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "af261b96-cdfe-4987-8689-bec0506287d2" (UID: "af261b96-cdfe-4987-8689-bec0506287d2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:56:50 crc kubenswrapper[4933]: I1201 09:56:50.974561 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af261b96-cdfe-4987-8689-bec0506287d2-inventory" (OuterVolumeSpecName: "inventory") pod "af261b96-cdfe-4987-8689-bec0506287d2" (UID: "af261b96-cdfe-4987-8689-bec0506287d2"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:56:51 crc kubenswrapper[4933]: I1201 09:56:51.027940 4933 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/af261b96-cdfe-4987-8689-bec0506287d2-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 09:56:51 crc kubenswrapper[4933]: I1201 09:56:51.028291 4933 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af261b96-cdfe-4987-8689-bec0506287d2-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:56:51 crc kubenswrapper[4933]: I1201 09:56:51.035106 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv597\" (UniqueName: \"kubernetes.io/projected/af261b96-cdfe-4987-8689-bec0506287d2-kube-api-access-pv597\") on node \"crc\" DevicePath \"\"" Dec 01 09:56:51 crc kubenswrapper[4933]: I1201 09:56:51.035169 4933 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af261b96-cdfe-4987-8689-bec0506287d2-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 09:56:51 crc kubenswrapper[4933]: I1201 09:56:51.622937 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grj94" event={"ID":"af261b96-cdfe-4987-8689-bec0506287d2","Type":"ContainerDied","Data":"d857e6eadad8562990043472cc8ecd12b66e8f6394905669f54c52541daa1148"} Dec 01 09:56:51 crc kubenswrapper[4933]: I1201 09:56:51.623012 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d857e6eadad8562990043472cc8ecd12b66e8f6394905669f54c52541daa1148" Dec 01 09:56:51 crc kubenswrapper[4933]: I1201 09:56:51.623368 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grj94" Dec 01 09:56:51 crc kubenswrapper[4933]: I1201 09:56:51.931756 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-tkd78"] Dec 01 09:56:51 crc kubenswrapper[4933]: E1201 09:56:51.932368 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af261b96-cdfe-4987-8689-bec0506287d2" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 01 09:56:51 crc kubenswrapper[4933]: I1201 09:56:51.932391 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="af261b96-cdfe-4987-8689-bec0506287d2" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 01 09:56:51 crc kubenswrapper[4933]: I1201 09:56:51.932742 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="af261b96-cdfe-4987-8689-bec0506287d2" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 01 09:56:51 crc kubenswrapper[4933]: I1201 09:56:51.933657 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tkd78" Dec 01 09:56:51 crc kubenswrapper[4933]: I1201 09:56:51.936714 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 09:56:51 crc kubenswrapper[4933]: I1201 09:56:51.936896 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8vmpq" Dec 01 09:56:51 crc kubenswrapper[4933]: I1201 09:56:51.937983 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 09:56:51 crc kubenswrapper[4933]: I1201 09:56:51.942208 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 09:56:51 crc kubenswrapper[4933]: I1201 09:56:51.966266 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-tkd78"] Dec 01 09:56:52 crc kubenswrapper[4933]: I1201 09:56:52.058803 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d9cb819c-73da-4725-aaca-3cac78b4670f-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tkd78\" (UID: \"d9cb819c-73da-4725-aaca-3cac78b4670f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tkd78" Dec 01 09:56:52 crc kubenswrapper[4933]: I1201 09:56:52.058906 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxvn8\" (UniqueName: \"kubernetes.io/projected/d9cb819c-73da-4725-aaca-3cac78b4670f-kube-api-access-rxvn8\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tkd78\" (UID: \"d9cb819c-73da-4725-aaca-3cac78b4670f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tkd78" Dec 01 09:56:52 crc kubenswrapper[4933]: I1201 09:56:52.058944 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9cb819c-73da-4725-aaca-3cac78b4670f-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tkd78\" (UID: \"d9cb819c-73da-4725-aaca-3cac78b4670f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tkd78" Dec 01 09:56:52 crc kubenswrapper[4933]: I1201 09:56:52.161614 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d9cb819c-73da-4725-aaca-3cac78b4670f-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tkd78\" (UID: \"d9cb819c-73da-4725-aaca-3cac78b4670f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tkd78" Dec 01 09:56:52 crc kubenswrapper[4933]: I1201 09:56:52.162177 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxvn8\" (UniqueName: \"kubernetes.io/projected/d9cb819c-73da-4725-aaca-3cac78b4670f-kube-api-access-rxvn8\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tkd78\" (UID: \"d9cb819c-73da-4725-aaca-3cac78b4670f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tkd78" Dec 01 09:56:52 crc kubenswrapper[4933]: I1201 09:56:52.162206 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9cb819c-73da-4725-aaca-3cac78b4670f-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tkd78\" (UID: \"d9cb819c-73da-4725-aaca-3cac78b4670f\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tkd78" Dec 01 09:56:52 crc kubenswrapper[4933]: I1201 09:56:52.174996 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d9cb819c-73da-4725-aaca-3cac78b4670f-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tkd78\" (UID: \"d9cb819c-73da-4725-aaca-3cac78b4670f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tkd78" Dec 01 09:56:52 crc kubenswrapper[4933]: I1201 09:56:52.174996 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9cb819c-73da-4725-aaca-3cac78b4670f-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tkd78\" (UID: \"d9cb819c-73da-4725-aaca-3cac78b4670f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tkd78" Dec 01 09:56:52 crc kubenswrapper[4933]: I1201 09:56:52.188360 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxvn8\" (UniqueName: \"kubernetes.io/projected/d9cb819c-73da-4725-aaca-3cac78b4670f-kube-api-access-rxvn8\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tkd78\" (UID: \"d9cb819c-73da-4725-aaca-3cac78b4670f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tkd78" Dec 01 09:56:52 crc kubenswrapper[4933]: I1201 09:56:52.195780 4933 scope.go:117] "RemoveContainer" containerID="fe5ac427829548c437732fd34d0acfd8e388d54c19d9f5204a5e27bde57067f2" Dec 01 09:56:52 crc kubenswrapper[4933]: I1201 09:56:52.255103 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tkd78" Dec 01 09:56:52 crc kubenswrapper[4933]: I1201 09:56:52.826133 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-tkd78"] Dec 01 09:56:53 crc kubenswrapper[4933]: I1201 09:56:53.646498 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tkd78" event={"ID":"d9cb819c-73da-4725-aaca-3cac78b4670f","Type":"ContainerStarted","Data":"08c7926a5d565e9d7c1a08e3e1f2e4d2f81817356f4d40549413d46198ed6199"} Dec 01 09:56:54 crc kubenswrapper[4933]: I1201 09:56:54.662721 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tkd78" event={"ID":"d9cb819c-73da-4725-aaca-3cac78b4670f","Type":"ContainerStarted","Data":"bceb5a5908539a975ecc273993f16f5089198cb5b4ead2180bccbcefe6910a90"} Dec 01 09:56:54 crc kubenswrapper[4933]: I1201 09:56:54.696041 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tkd78" podStartSLOduration=3.134874625 podStartE2EDuration="3.696008375s" podCreationTimestamp="2025-12-01 09:56:51 +0000 UTC" firstStartedPulling="2025-12-01 09:56:52.830683629 +0000 UTC m=+1503.472407244" lastFinishedPulling="2025-12-01 09:56:53.391817389 +0000 UTC m=+1504.033540994" observedRunningTime="2025-12-01 09:56:54.683205612 +0000 UTC m=+1505.324929227" watchObservedRunningTime="2025-12-01 09:56:54.696008375 +0000 UTC m=+1505.337731990" Dec 01 09:56:56 crc kubenswrapper[4933]: I1201 09:56:56.694518 4933 generic.go:334] "Generic (PLEG): container finished" podID="d9cb819c-73da-4725-aaca-3cac78b4670f" containerID="bceb5a5908539a975ecc273993f16f5089198cb5b4ead2180bccbcefe6910a90" exitCode=0 Dec 01 09:56:56 crc kubenswrapper[4933]: I1201 09:56:56.694642 4933 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tkd78" event={"ID":"d9cb819c-73da-4725-aaca-3cac78b4670f","Type":"ContainerDied","Data":"bceb5a5908539a975ecc273993f16f5089198cb5b4ead2180bccbcefe6910a90"} Dec 01 09:56:58 crc kubenswrapper[4933]: I1201 09:56:58.189965 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tkd78" Dec 01 09:56:58 crc kubenswrapper[4933]: I1201 09:56:58.312718 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d9cb819c-73da-4725-aaca-3cac78b4670f-ssh-key\") pod \"d9cb819c-73da-4725-aaca-3cac78b4670f\" (UID: \"d9cb819c-73da-4725-aaca-3cac78b4670f\") " Dec 01 09:56:58 crc kubenswrapper[4933]: I1201 09:56:58.313141 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9cb819c-73da-4725-aaca-3cac78b4670f-inventory\") pod \"d9cb819c-73da-4725-aaca-3cac78b4670f\" (UID: \"d9cb819c-73da-4725-aaca-3cac78b4670f\") " Dec 01 09:56:58 crc kubenswrapper[4933]: I1201 09:56:58.313360 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxvn8\" (UniqueName: \"kubernetes.io/projected/d9cb819c-73da-4725-aaca-3cac78b4670f-kube-api-access-rxvn8\") pod \"d9cb819c-73da-4725-aaca-3cac78b4670f\" (UID: \"d9cb819c-73da-4725-aaca-3cac78b4670f\") " Dec 01 09:56:58 crc kubenswrapper[4933]: I1201 09:56:58.320440 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9cb819c-73da-4725-aaca-3cac78b4670f-kube-api-access-rxvn8" (OuterVolumeSpecName: "kube-api-access-rxvn8") pod "d9cb819c-73da-4725-aaca-3cac78b4670f" (UID: "d9cb819c-73da-4725-aaca-3cac78b4670f"). InnerVolumeSpecName "kube-api-access-rxvn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:56:58 crc kubenswrapper[4933]: I1201 09:56:58.344381 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9cb819c-73da-4725-aaca-3cac78b4670f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d9cb819c-73da-4725-aaca-3cac78b4670f" (UID: "d9cb819c-73da-4725-aaca-3cac78b4670f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:56:58 crc kubenswrapper[4933]: I1201 09:56:58.392262 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9cb819c-73da-4725-aaca-3cac78b4670f-inventory" (OuterVolumeSpecName: "inventory") pod "d9cb819c-73da-4725-aaca-3cac78b4670f" (UID: "d9cb819c-73da-4725-aaca-3cac78b4670f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:56:58 crc kubenswrapper[4933]: I1201 09:56:58.417626 4933 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d9cb819c-73da-4725-aaca-3cac78b4670f-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 09:56:58 crc kubenswrapper[4933]: I1201 09:56:58.417661 4933 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9cb819c-73da-4725-aaca-3cac78b4670f-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 09:56:58 crc kubenswrapper[4933]: I1201 09:56:58.417674 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxvn8\" (UniqueName: \"kubernetes.io/projected/d9cb819c-73da-4725-aaca-3cac78b4670f-kube-api-access-rxvn8\") on node \"crc\" DevicePath \"\"" Dec 01 09:56:58 crc kubenswrapper[4933]: I1201 09:56:58.718186 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tkd78" event={"ID":"d9cb819c-73da-4725-aaca-3cac78b4670f","Type":"ContainerDied","Data":"08c7926a5d565e9d7c1a08e3e1f2e4d2f81817356f4d40549413d46198ed6199"} Dec 01 09:56:58 crc kubenswrapper[4933]: I1201 09:56:58.718244 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08c7926a5d565e9d7c1a08e3e1f2e4d2f81817356f4d40549413d46198ed6199" Dec 01 09:56:58 crc kubenswrapper[4933]: I1201 09:56:58.718269 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tkd78" Dec 01 09:56:58 crc kubenswrapper[4933]: I1201 09:56:58.926257 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd"] Dec 01 09:56:58 crc kubenswrapper[4933]: E1201 09:56:58.927029 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9cb819c-73da-4725-aaca-3cac78b4670f" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 01 09:56:58 crc kubenswrapper[4933]: I1201 09:56:58.927063 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9cb819c-73da-4725-aaca-3cac78b4670f" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 01 09:56:58 crc kubenswrapper[4933]: I1201 09:56:58.927389 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9cb819c-73da-4725-aaca-3cac78b4670f" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 01 09:56:58 crc kubenswrapper[4933]: I1201 09:56:58.928469 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd" Dec 01 09:56:58 crc kubenswrapper[4933]: I1201 09:56:58.933367 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 09:56:58 crc kubenswrapper[4933]: I1201 09:56:58.934358 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 09:56:58 crc kubenswrapper[4933]: I1201 09:56:58.934659 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8vmpq" Dec 01 09:56:58 crc kubenswrapper[4933]: I1201 09:56:58.945493 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd"] Dec 01 09:56:58 crc kubenswrapper[4933]: I1201 09:56:58.948447 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 09:56:59 crc kubenswrapper[4933]: I1201 09:56:59.036948 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32dfd9a4-8242-4931-a791-de1fc8b1d4a9-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd\" (UID: \"32dfd9a4-8242-4931-a791-de1fc8b1d4a9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd" Dec 01 09:56:59 crc kubenswrapper[4933]: I1201 09:56:59.037149 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32dfd9a4-8242-4931-a791-de1fc8b1d4a9-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd\" (UID: \"32dfd9a4-8242-4931-a791-de1fc8b1d4a9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd" Dec 01 09:56:59 crc kubenswrapper[4933]: I1201 09:56:59.037566 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32dfd9a4-8242-4931-a791-de1fc8b1d4a9-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd\" (UID: \"32dfd9a4-8242-4931-a791-de1fc8b1d4a9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd" Dec 01 09:56:59 crc kubenswrapper[4933]: I1201 09:56:59.037731 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdvxz\" (UniqueName: \"kubernetes.io/projected/32dfd9a4-8242-4931-a791-de1fc8b1d4a9-kube-api-access-rdvxz\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd\" (UID: \"32dfd9a4-8242-4931-a791-de1fc8b1d4a9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd" Dec 01 09:56:59 crc kubenswrapper[4933]: I1201 09:56:59.140256 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32dfd9a4-8242-4931-a791-de1fc8b1d4a9-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd\" (UID: \"32dfd9a4-8242-4931-a791-de1fc8b1d4a9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd" Dec 01 09:56:59 crc kubenswrapper[4933]: I1201 09:56:59.140863 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdvxz\" (UniqueName: \"kubernetes.io/projected/32dfd9a4-8242-4931-a791-de1fc8b1d4a9-kube-api-access-rdvxz\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd\" (UID: \"32dfd9a4-8242-4931-a791-de1fc8b1d4a9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd" Dec 01 09:56:59 crc kubenswrapper[4933]: I1201 09:56:59.140927 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32dfd9a4-8242-4931-a791-de1fc8b1d4a9-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd\" (UID: \"32dfd9a4-8242-4931-a791-de1fc8b1d4a9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd" Dec 01 09:56:59 crc kubenswrapper[4933]: I1201 09:56:59.141008 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32dfd9a4-8242-4931-a791-de1fc8b1d4a9-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd\" (UID: \"32dfd9a4-8242-4931-a791-de1fc8b1d4a9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd" Dec 01 09:56:59 crc kubenswrapper[4933]: I1201 09:56:59.149574 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32dfd9a4-8242-4931-a791-de1fc8b1d4a9-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd\" (UID: \"32dfd9a4-8242-4931-a791-de1fc8b1d4a9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd" Dec 01 09:56:59 crc kubenswrapper[4933]: I1201 09:56:59.150088 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32dfd9a4-8242-4931-a791-de1fc8b1d4a9-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd\" (UID: \"32dfd9a4-8242-4931-a791-de1fc8b1d4a9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd" Dec 01 09:56:59 crc kubenswrapper[4933]: I1201 09:56:59.155481 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32dfd9a4-8242-4931-a791-de1fc8b1d4a9-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd\" (UID: \"32dfd9a4-8242-4931-a791-de1fc8b1d4a9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd" Dec 01 09:56:59 crc kubenswrapper[4933]: I1201 09:56:59.180596 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdvxz\" (UniqueName: \"kubernetes.io/projected/32dfd9a4-8242-4931-a791-de1fc8b1d4a9-kube-api-access-rdvxz\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd\" (UID: \"32dfd9a4-8242-4931-a791-de1fc8b1d4a9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd" Dec 01 09:56:59 crc kubenswrapper[4933]: I1201 09:56:59.249216 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd" Dec 01 09:56:59 crc kubenswrapper[4933]: I1201 09:56:59.881177 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd"] Dec 01 09:57:00 crc kubenswrapper[4933]: I1201 09:57:00.745974 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd" event={"ID":"32dfd9a4-8242-4931-a791-de1fc8b1d4a9","Type":"ContainerStarted","Data":"588c022a2c0edecb4eb91d034fdc4c22650b1143f57ce6810388d78703ab9aef"} Dec 01 09:57:01 crc kubenswrapper[4933]: I1201 09:57:01.755867 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd" event={"ID":"32dfd9a4-8242-4931-a791-de1fc8b1d4a9","Type":"ContainerStarted","Data":"4dcb7d2f9bb8bbcb0473980c8d293b7d96f1cb496d49e7d5e4fba9d0eff6d18c"} Dec 01 09:57:01 crc kubenswrapper[4933]: I1201 09:57:01.781232 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd" podStartSLOduration=3.088637051 podStartE2EDuration="3.781205645s" podCreationTimestamp="2025-12-01 09:56:58 +0000 UTC" firstStartedPulling="2025-12-01 09:56:59.891680127 +0000 UTC m=+1510.533403742" lastFinishedPulling="2025-12-01 09:57:00.584248721 +0000 UTC m=+1511.225972336" observedRunningTime="2025-12-01 09:57:01.77606844 +0000 UTC m=+1512.417792075" watchObservedRunningTime="2025-12-01 09:57:01.781205645 +0000 UTC m=+1512.422929260" Dec 01 09:57:20 crc kubenswrapper[4933]: I1201 09:57:20.025602 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-22v8s"] Dec 01 09:57:20 crc kubenswrapper[4933]: I1201 09:57:20.031695 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-22v8s" Dec 01 09:57:20 crc kubenswrapper[4933]: I1201 09:57:20.052490 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-22v8s"] Dec 01 09:57:20 crc kubenswrapper[4933]: I1201 09:57:20.099425 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8-utilities\") pod \"community-operators-22v8s\" (UID: \"ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8\") " pod="openshift-marketplace/community-operators-22v8s" Dec 01 09:57:20 crc kubenswrapper[4933]: I1201 09:57:20.099610 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz5z5\" (UniqueName: \"kubernetes.io/projected/ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8-kube-api-access-xz5z5\") pod \"community-operators-22v8s\" (UID: \"ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8\") " pod="openshift-marketplace/community-operators-22v8s" Dec 01 09:57:20 crc kubenswrapper[4933]: I1201 09:57:20.099685 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8-catalog-content\") pod \"community-operators-22v8s\" (UID: \"ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8\") " pod="openshift-marketplace/community-operators-22v8s" Dec 01 09:57:20 crc kubenswrapper[4933]: I1201 09:57:20.203013 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz5z5\" (UniqueName: \"kubernetes.io/projected/ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8-kube-api-access-xz5z5\") pod \"community-operators-22v8s\" (UID: \"ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8\") " pod="openshift-marketplace/community-operators-22v8s" Dec 01 09:57:20 crc kubenswrapper[4933]: I1201 09:57:20.203133 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8-catalog-content\") pod \"community-operators-22v8s\" (UID: \"ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8\") " pod="openshift-marketplace/community-operators-22v8s" Dec 01 09:57:20 crc kubenswrapper[4933]: I1201 09:57:20.203249 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8-utilities\") pod \"community-operators-22v8s\" (UID: \"ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8\") " pod="openshift-marketplace/community-operators-22v8s" Dec 01 09:57:20 crc kubenswrapper[4933]: I1201 09:57:20.203747 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8-catalog-content\") pod \"community-operators-22v8s\" (UID: \"ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8\") " pod="openshift-marketplace/community-operators-22v8s" Dec 01 09:57:20 crc kubenswrapper[4933]: I1201 09:57:20.203875 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8-utilities\") pod \"community-operators-22v8s\" (UID: \"ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8\") " pod="openshift-marketplace/community-operators-22v8s" Dec 01 09:57:20 crc kubenswrapper[4933]: I1201 09:57:20.226907 4933 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xz5z5\" (UniqueName: \"kubernetes.io/projected/ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8-kube-api-access-xz5z5\") pod \"community-operators-22v8s\" (UID: \"ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8\") " pod="openshift-marketplace/community-operators-22v8s" Dec 01 09:57:20 crc kubenswrapper[4933]: I1201 09:57:20.363459 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-22v8s" Dec 01 09:57:21 crc kubenswrapper[4933]: I1201 09:57:21.168275 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-22v8s"] Dec 01 09:57:21 crc kubenswrapper[4933]: I1201 09:57:21.973238 4933 generic.go:334] "Generic (PLEG): container finished" podID="ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8" containerID="9baa44487919d739787bbf47d80830b4de01cb38f196ffc48424cd99c631e2e6" exitCode=0 Dec 01 09:57:21 crc kubenswrapper[4933]: I1201 09:57:21.973361 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22v8s" event={"ID":"ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8","Type":"ContainerDied","Data":"9baa44487919d739787bbf47d80830b4de01cb38f196ffc48424cd99c631e2e6"} Dec 01 09:57:21 crc kubenswrapper[4933]: I1201 09:57:21.973882 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22v8s" event={"ID":"ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8","Type":"ContainerStarted","Data":"9a11870a08c1634247250f3060b7bda510d393679d163d3dda284bba5f118091"} Dec 01 09:57:22 crc kubenswrapper[4933]: I1201 09:57:22.986711 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22v8s" event={"ID":"ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8","Type":"ContainerStarted","Data":"a559ec0b2863ee69b91e6504f46afb5dd03f8cacbccab82b76f5f62dd4b5e9cf"} Dec 01 09:57:24 crc kubenswrapper[4933]: I1201 09:57:24.001512 4933 generic.go:334] "Generic (PLEG): container finished" podID="ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8" containerID="a559ec0b2863ee69b91e6504f46afb5dd03f8cacbccab82b76f5f62dd4b5e9cf" exitCode=0 Dec 01 09:57:24 crc kubenswrapper[4933]: I1201 09:57:24.001585 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22v8s" event={"ID":"ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8","Type":"ContainerDied","Data":"a559ec0b2863ee69b91e6504f46afb5dd03f8cacbccab82b76f5f62dd4b5e9cf"} Dec 01 09:57:24 crc kubenswrapper[4933]: I1201 09:57:24.406859 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xccnw"] Dec 01 09:57:24 crc kubenswrapper[4933]: I1201 09:57:24.424552 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xccnw" Dec 01 09:57:24 crc kubenswrapper[4933]: I1201 09:57:24.431571 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xccnw"] Dec 01 09:57:24 crc kubenswrapper[4933]: I1201 09:57:24.561378 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f8eb42b-49ff-43b7-9d59-597ff62f0428-utilities\") pod \"redhat-marketplace-xccnw\" (UID: \"1f8eb42b-49ff-43b7-9d59-597ff62f0428\") " pod="openshift-marketplace/redhat-marketplace-xccnw" Dec 01 09:57:24 crc kubenswrapper[4933]: I1201 09:57:24.561596 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f8eb42b-49ff-43b7-9d59-597ff62f0428-catalog-content\") pod \"redhat-marketplace-xccnw\" (UID: \"1f8eb42b-49ff-43b7-9d59-597ff62f0428\") " pod="openshift-marketplace/redhat-marketplace-xccnw" Dec 01 09:57:24 crc kubenswrapper[4933]: I1201 09:57:24.561694 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2szxl\" (UniqueName: \"kubernetes.io/projected/1f8eb42b-49ff-43b7-9d59-597ff62f0428-kube-api-access-2szxl\") pod \"redhat-marketplace-xccnw\" (UID: \"1f8eb42b-49ff-43b7-9d59-597ff62f0428\") " pod="openshift-marketplace/redhat-marketplace-xccnw" Dec 01 09:57:24 crc kubenswrapper[4933]: I1201 09:57:24.664647 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f8eb42b-49ff-43b7-9d59-597ff62f0428-utilities\") pod \"redhat-marketplace-xccnw\" (UID: \"1f8eb42b-49ff-43b7-9d59-597ff62f0428\") " pod="openshift-marketplace/redhat-marketplace-xccnw" Dec 01 09:57:24 crc kubenswrapper[4933]: I1201 09:57:24.664731 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f8eb42b-49ff-43b7-9d59-597ff62f0428-catalog-content\") pod \"redhat-marketplace-xccnw\" (UID: \"1f8eb42b-49ff-43b7-9d59-597ff62f0428\") " pod="openshift-marketplace/redhat-marketplace-xccnw" Dec 01 09:57:24 crc kubenswrapper[4933]: I1201 09:57:24.664766 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2szxl\" (UniqueName: \"kubernetes.io/projected/1f8eb42b-49ff-43b7-9d59-597ff62f0428-kube-api-access-2szxl\") pod \"redhat-marketplace-xccnw\" (UID: \"1f8eb42b-49ff-43b7-9d59-597ff62f0428\") " pod="openshift-marketplace/redhat-marketplace-xccnw" Dec 01 09:57:24 crc kubenswrapper[4933]: I1201 09:57:24.665267 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f8eb42b-49ff-43b7-9d59-597ff62f0428-utilities\") pod \"redhat-marketplace-xccnw\" (UID: \"1f8eb42b-49ff-43b7-9d59-597ff62f0428\") " pod="openshift-marketplace/redhat-marketplace-xccnw" Dec 01 09:57:24 crc kubenswrapper[4933]: I1201 09:57:24.665612 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f8eb42b-49ff-43b7-9d59-597ff62f0428-catalog-content\") pod \"redhat-marketplace-xccnw\" (UID: \"1f8eb42b-49ff-43b7-9d59-597ff62f0428\") " pod="openshift-marketplace/redhat-marketplace-xccnw" Dec 01 09:57:24 crc kubenswrapper[4933]: I1201 09:57:24.706010 4933 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2szxl\" (UniqueName: \"kubernetes.io/projected/1f8eb42b-49ff-43b7-9d59-597ff62f0428-kube-api-access-2szxl\") pod \"redhat-marketplace-xccnw\" (UID: \"1f8eb42b-49ff-43b7-9d59-597ff62f0428\") " pod="openshift-marketplace/redhat-marketplace-xccnw" Dec 01 09:57:24 crc kubenswrapper[4933]: I1201 09:57:24.750060 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xccnw" Dec 01 09:57:25 crc kubenswrapper[4933]: I1201 09:57:25.394480 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xccnw"] Dec 01 09:57:25 crc kubenswrapper[4933]: W1201 09:57:25.396072 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f8eb42b_49ff_43b7_9d59_597ff62f0428.slice/crio-2b852e1dd3620316d080984cbb9bc355808a0509ae1537750f3e9c8ffad8b062 WatchSource:0}: Error finding container 2b852e1dd3620316d080984cbb9bc355808a0509ae1537750f3e9c8ffad8b062: Status 404 returned error can't find the container with id 2b852e1dd3620316d080984cbb9bc355808a0509ae1537750f3e9c8ffad8b062 Dec 01 09:57:26 crc kubenswrapper[4933]: I1201 09:57:26.029707 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xccnw" event={"ID":"1f8eb42b-49ff-43b7-9d59-597ff62f0428","Type":"ContainerStarted","Data":"2848f7f31066f29b62657557a442e610931423733764bb5ac4687ca31d7247c7"} Dec 01 09:57:26 crc kubenswrapper[4933]: I1201 09:57:26.030208 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xccnw" event={"ID":"1f8eb42b-49ff-43b7-9d59-597ff62f0428","Type":"ContainerStarted","Data":"2b852e1dd3620316d080984cbb9bc355808a0509ae1537750f3e9c8ffad8b062"} Dec 01 09:57:26 crc kubenswrapper[4933]: I1201 09:57:26.033133 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22v8s" event={"ID":"ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8","Type":"ContainerStarted","Data":"5928c494f40d6098e8765703098d8584d0075c89adb28ad42d10da3375b9a880"} Dec 01 09:57:26 crc kubenswrapper[4933]: I1201 09:57:26.091798 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-22v8s" podStartSLOduration=3.355895089 podStartE2EDuration="6.09177298s" podCreationTimestamp="2025-12-01 09:57:20 +0000 UTC" firstStartedPulling="2025-12-01 09:57:21.976459502 +0000 UTC m=+1532.618183117" lastFinishedPulling="2025-12-01 09:57:24.712337393 +0000 UTC m=+1535.354061008" observedRunningTime="2025-12-01 09:57:26.074937168 +0000 UTC m=+1536.716660783" watchObservedRunningTime="2025-12-01 09:57:26.09177298 +0000 UTC m=+1536.733496615" Dec 01 09:57:27 crc kubenswrapper[4933]: I1201 09:57:27.045637 4933 generic.go:334] "Generic (PLEG): container finished" podID="1f8eb42b-49ff-43b7-9d59-597ff62f0428" containerID="2848f7f31066f29b62657557a442e610931423733764bb5ac4687ca31d7247c7" exitCode=0 Dec 01 09:57:27 crc kubenswrapper[4933]: I1201 09:57:27.045714 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xccnw" event={"ID":"1f8eb42b-49ff-43b7-9d59-597ff62f0428","Type":"ContainerDied","Data":"2848f7f31066f29b62657557a442e610931423733764bb5ac4687ca31d7247c7"} Dec 01 09:57:28 crc kubenswrapper[4933]: I1201 09:57:28.057551 4933 generic.go:334] "Generic (PLEG): container finished" podID="1f8eb42b-49ff-43b7-9d59-597ff62f0428" 
containerID="c461b021133c75f50467e7db62ea156051f4422972c446ba685ff12715e353b9" exitCode=0 Dec 01 09:57:28 crc kubenswrapper[4933]: I1201 09:57:28.057667 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xccnw" event={"ID":"1f8eb42b-49ff-43b7-9d59-597ff62f0428","Type":"ContainerDied","Data":"c461b021133c75f50467e7db62ea156051f4422972c446ba685ff12715e353b9"} Dec 01 09:57:29 crc kubenswrapper[4933]: I1201 09:57:29.073005 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xccnw" event={"ID":"1f8eb42b-49ff-43b7-9d59-597ff62f0428","Type":"ContainerStarted","Data":"f1ce8899c61722a917b73cf9acfbb142c2e4bf9a1e90692e751b6b86e774697a"} Dec 01 09:57:29 crc kubenswrapper[4933]: I1201 09:57:29.100158 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xccnw" podStartSLOduration=3.532660528 podStartE2EDuration="5.100135853s" podCreationTimestamp="2025-12-01 09:57:24 +0000 UTC" firstStartedPulling="2025-12-01 09:57:27.049295981 +0000 UTC m=+1537.691019596" lastFinishedPulling="2025-12-01 09:57:28.616771306 +0000 UTC m=+1539.258494921" observedRunningTime="2025-12-01 09:57:29.093650425 +0000 UTC m=+1539.735374050" watchObservedRunningTime="2025-12-01 09:57:29.100135853 +0000 UTC m=+1539.741859468" Dec 01 09:57:30 crc kubenswrapper[4933]: I1201 09:57:30.364616 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-22v8s" Dec 01 09:57:30 crc kubenswrapper[4933]: I1201 09:57:30.364977 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-22v8s" Dec 01 09:57:30 crc kubenswrapper[4933]: I1201 09:57:30.415489 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-22v8s" Dec 01 09:57:31 crc kubenswrapper[4933]: I1201 09:57:31.145277 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-22v8s" Dec 01 09:57:31 crc kubenswrapper[4933]: I1201 09:57:31.622394 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-knm8k"] Dec 01 09:57:31 crc kubenswrapper[4933]: I1201 09:57:31.624934 4933 util.go:30] "No sandbox for pod can be found. 
Dec 01 09:57:30 crc kubenswrapper[4933]: I1201 09:57:30.364616 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-22v8s"
Dec 01 09:57:30 crc kubenswrapper[4933]: I1201 09:57:30.364977 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-22v8s"
Dec 01 09:57:30 crc kubenswrapper[4933]: I1201 09:57:30.415489 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-22v8s"
Dec 01 09:57:31 crc kubenswrapper[4933]: I1201 09:57:31.145277 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-22v8s"
Dec 01 09:57:31 crc kubenswrapper[4933]: I1201 09:57:31.622394 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-knm8k"]
Dec 01 09:57:31 crc kubenswrapper[4933]: I1201 09:57:31.624934 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-knm8k"
Dec 01 09:57:31 crc kubenswrapper[4933]: I1201 09:57:31.701427 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-knm8k"]
Dec 01 09:57:31 crc kubenswrapper[4933]: I1201 09:57:31.827363 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d112a80b-a3db-4671-98a3-21714cf53b98-catalog-content\") pod \"certified-operators-knm8k\" (UID: \"d112a80b-a3db-4671-98a3-21714cf53b98\") " pod="openshift-marketplace/certified-operators-knm8k"
Dec 01 09:57:31 crc kubenswrapper[4933]: I1201 09:57:31.827570 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d112a80b-a3db-4671-98a3-21714cf53b98-utilities\") pod \"certified-operators-knm8k\" (UID: \"d112a80b-a3db-4671-98a3-21714cf53b98\") " pod="openshift-marketplace/certified-operators-knm8k"
Dec 01 09:57:31 crc kubenswrapper[4933]: I1201 09:57:31.827645 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d997g\" (UniqueName: \"kubernetes.io/projected/d112a80b-a3db-4671-98a3-21714cf53b98-kube-api-access-d997g\") pod \"certified-operators-knm8k\" (UID: \"d112a80b-a3db-4671-98a3-21714cf53b98\") " pod="openshift-marketplace/certified-operators-knm8k"
Dec 01 09:57:31 crc kubenswrapper[4933]: I1201 09:57:31.930530 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d112a80b-a3db-4671-98a3-21714cf53b98-utilities\") pod \"certified-operators-knm8k\" (UID: \"d112a80b-a3db-4671-98a3-21714cf53b98\") " pod="openshift-marketplace/certified-operators-knm8k"
Dec 01 09:57:31 crc kubenswrapper[4933]: I1201 09:57:31.930909 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d997g\" (UniqueName: \"kubernetes.io/projected/d112a80b-a3db-4671-98a3-21714cf53b98-kube-api-access-d997g\") pod \"certified-operators-knm8k\" (UID: \"d112a80b-a3db-4671-98a3-21714cf53b98\") " pod="openshift-marketplace/certified-operators-knm8k"
Dec 01 09:57:31 crc kubenswrapper[4933]: I1201 09:57:31.931124 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d112a80b-a3db-4671-98a3-21714cf53b98-catalog-content\") pod \"certified-operators-knm8k\" (UID: \"d112a80b-a3db-4671-98a3-21714cf53b98\") " pod="openshift-marketplace/certified-operators-knm8k"
Dec 01 09:57:31 crc kubenswrapper[4933]: I1201 09:57:31.931203 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d112a80b-a3db-4671-98a3-21714cf53b98-utilities\") pod \"certified-operators-knm8k\" (UID: \"d112a80b-a3db-4671-98a3-21714cf53b98\") " pod="openshift-marketplace/certified-operators-knm8k"
Dec 01 09:57:31 crc kubenswrapper[4933]: I1201 09:57:31.931750 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d112a80b-a3db-4671-98a3-21714cf53b98-catalog-content\") pod \"certified-operators-knm8k\" (UID: \"d112a80b-a3db-4671-98a3-21714cf53b98\") " pod="openshift-marketplace/certified-operators-knm8k"
Dec 01 09:57:31 crc kubenswrapper[4933]: I1201 09:57:31.954622 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d997g\" (UniqueName: \"kubernetes.io/projected/d112a80b-a3db-4671-98a3-21714cf53b98-kube-api-access-d997g\") pod \"certified-operators-knm8k\" (UID: \"d112a80b-a3db-4671-98a3-21714cf53b98\") " pod="openshift-marketplace/certified-operators-knm8k"
"MountVolume.SetUp succeeded for volume \"kube-api-access-d997g\" (UniqueName: \"kubernetes.io/projected/d112a80b-a3db-4671-98a3-21714cf53b98-kube-api-access-d997g\") pod \"certified-operators-knm8k\" (UID: \"d112a80b-a3db-4671-98a3-21714cf53b98\") " pod="openshift-marketplace/certified-operators-knm8k" Dec 01 09:57:31 crc kubenswrapper[4933]: I1201 09:57:31.965372 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-knm8k" Dec 01 09:57:32 crc kubenswrapper[4933]: I1201 09:57:32.531494 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-knm8k"] Dec 01 09:57:33 crc kubenswrapper[4933]: I1201 09:57:33.145227 4933 generic.go:334] "Generic (PLEG): container finished" podID="d112a80b-a3db-4671-98a3-21714cf53b98" containerID="66111c4ec1c92a72ae471af14941dfa50afd528c4b10e5f370cd4378394318a3" exitCode=0 Dec 01 09:57:33 crc kubenswrapper[4933]: I1201 09:57:33.145363 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knm8k" event={"ID":"d112a80b-a3db-4671-98a3-21714cf53b98","Type":"ContainerDied","Data":"66111c4ec1c92a72ae471af14941dfa50afd528c4b10e5f370cd4378394318a3"} Dec 01 09:57:33 crc kubenswrapper[4933]: I1201 09:57:33.145928 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knm8k" event={"ID":"d112a80b-a3db-4671-98a3-21714cf53b98","Type":"ContainerStarted","Data":"ce831521344fc33cf41acd81ab64ec648a9ef5bf0593118704dd3d00cd695ba3"} Dec 01 09:57:33 crc kubenswrapper[4933]: I1201 09:57:33.800656 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-22v8s"] Dec 01 09:57:33 crc kubenswrapper[4933]: I1201 09:57:33.801765 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-22v8s" podUID="ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8" containerName="registry-server" containerID="cri-o://5928c494f40d6098e8765703098d8584d0075c89adb28ad42d10da3375b9a880" gracePeriod=2 Dec 01 09:57:34 crc kubenswrapper[4933]: I1201 09:57:34.159016 4933 generic.go:334] "Generic (PLEG): container finished" podID="ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8" containerID="5928c494f40d6098e8765703098d8584d0075c89adb28ad42d10da3375b9a880" exitCode=0 Dec 01 09:57:34 crc kubenswrapper[4933]: I1201 09:57:34.159075 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22v8s" event={"ID":"ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8","Type":"ContainerDied","Data":"5928c494f40d6098e8765703098d8584d0075c89adb28ad42d10da3375b9a880"} Dec 01 09:57:34 crc kubenswrapper[4933]: I1201 09:57:34.751002 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xccnw" Dec 01 09:57:34 crc kubenswrapper[4933]: I1201 09:57:34.751329 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xccnw" Dec 01 09:57:34 crc kubenswrapper[4933]: I1201 09:57:34.788694 4933 util.go:48] "No ready sandbox for pod can be found. 
Dec 01 09:57:34 crc kubenswrapper[4933]: I1201 09:57:34.816873 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xccnw"
Dec 01 09:57:34 crc kubenswrapper[4933]: I1201 09:57:34.817393 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8-utilities\") pod \"ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8\" (UID: \"ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8\") "
Dec 01 09:57:34 crc kubenswrapper[4933]: I1201 09:57:34.817447 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz5z5\" (UniqueName: \"kubernetes.io/projected/ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8-kube-api-access-xz5z5\") pod \"ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8\" (UID: \"ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8\") "
Dec 01 09:57:34 crc kubenswrapper[4933]: I1201 09:57:34.817646 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8-catalog-content\") pod \"ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8\" (UID: \"ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8\") "
Dec 01 09:57:34 crc kubenswrapper[4933]: I1201 09:57:34.818181 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8-utilities" (OuterVolumeSpecName: "utilities") pod "ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8" (UID: "ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:57:34 crc kubenswrapper[4933]: I1201 09:57:34.824605 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8-kube-api-access-xz5z5" (OuterVolumeSpecName: "kube-api-access-xz5z5") pod "ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8" (UID: "ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8"). InnerVolumeSpecName "kube-api-access-xz5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:57:34 crc kubenswrapper[4933]: I1201 09:57:34.875018 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8" (UID: "ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:57:34 crc kubenswrapper[4933]: I1201 09:57:34.919800 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 09:57:34 crc kubenswrapper[4933]: I1201 09:57:34.919841 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 09:57:34 crc kubenswrapper[4933]: I1201 09:57:34.919853 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz5z5\" (UniqueName: \"kubernetes.io/projected/ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8-kube-api-access-xz5z5\") on node \"crc\" DevicePath \"\"" Dec 01 09:57:35 crc kubenswrapper[4933]: I1201 09:57:35.171760 4933 generic.go:334] "Generic (PLEG): container finished" podID="d112a80b-a3db-4671-98a3-21714cf53b98" containerID="a80f0ab921dc9b80f160c36ea7b3c2d4e03d81640a94655cffe7f43ca86f4675" exitCode=0 Dec 01 09:57:35 crc kubenswrapper[4933]: I1201 09:57:35.171868 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knm8k" event={"ID":"d112a80b-a3db-4671-98a3-21714cf53b98","Type":"ContainerDied","Data":"a80f0ab921dc9b80f160c36ea7b3c2d4e03d81640a94655cffe7f43ca86f4675"} Dec 01 09:57:35 crc kubenswrapper[4933]: I1201 09:57:35.181963 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-22v8s" Dec 01 09:57:35 crc kubenswrapper[4933]: I1201 09:57:35.185539 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22v8s" event={"ID":"ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8","Type":"ContainerDied","Data":"9a11870a08c1634247250f3060b7bda510d393679d163d3dda284bba5f118091"} Dec 01 09:57:35 crc kubenswrapper[4933]: I1201 09:57:35.185639 4933 scope.go:117] "RemoveContainer" containerID="5928c494f40d6098e8765703098d8584d0075c89adb28ad42d10da3375b9a880" Dec 01 09:57:35 crc kubenswrapper[4933]: I1201 09:57:35.219765 4933 scope.go:117] "RemoveContainer" containerID="a559ec0b2863ee69b91e6504f46afb5dd03f8cacbccab82b76f5f62dd4b5e9cf" Dec 01 09:57:35 crc kubenswrapper[4933]: I1201 09:57:35.242370 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-22v8s"] Dec 01 09:57:35 crc kubenswrapper[4933]: I1201 09:57:35.257019 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-22v8s"] Dec 01 09:57:35 crc kubenswrapper[4933]: I1201 09:57:35.259679 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xccnw" Dec 01 09:57:35 crc kubenswrapper[4933]: I1201 09:57:35.272497 4933 scope.go:117] "RemoveContainer" containerID="9baa44487919d739787bbf47d80830b4de01cb38f196ffc48424cd99c631e2e6" Dec 01 09:57:35 crc kubenswrapper[4933]: I1201 09:57:35.682706 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8" path="/var/lib/kubelet/pods/ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8/volumes" Dec 01 09:57:37 crc kubenswrapper[4933]: I1201 09:57:37.210448 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knm8k" 
event={"ID":"d112a80b-a3db-4671-98a3-21714cf53b98","Type":"ContainerStarted","Data":"8a107df15141f0bc10ee41d451ccc2af06a5f91de3b7874d94b05216e83ebc3a"} Dec 01 09:57:37 crc kubenswrapper[4933]: I1201 09:57:37.239287 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-knm8k" podStartSLOduration=3.213282947 podStartE2EDuration="6.239265603s" podCreationTimestamp="2025-12-01 09:57:31 +0000 UTC" firstStartedPulling="2025-12-01 09:57:33.148406052 +0000 UTC m=+1543.790129667" lastFinishedPulling="2025-12-01 09:57:36.174388708 +0000 UTC m=+1546.816112323" observedRunningTime="2025-12-01 09:57:37.22808983 +0000 UTC m=+1547.869813455" watchObservedRunningTime="2025-12-01 09:57:37.239265603 +0000 UTC m=+1547.880989218" Dec 01 09:57:38 crc kubenswrapper[4933]: I1201 09:57:38.000803 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xccnw"] Dec 01 09:57:38 crc kubenswrapper[4933]: I1201 09:57:38.001880 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xccnw" podUID="1f8eb42b-49ff-43b7-9d59-597ff62f0428" containerName="registry-server" containerID="cri-o://f1ce8899c61722a917b73cf9acfbb142c2e4bf9a1e90692e751b6b86e774697a" gracePeriod=2 Dec 01 09:57:38 crc kubenswrapper[4933]: I1201 09:57:38.229863 4933 generic.go:334] "Generic (PLEG): container finished" podID="1f8eb42b-49ff-43b7-9d59-597ff62f0428" containerID="f1ce8899c61722a917b73cf9acfbb142c2e4bf9a1e90692e751b6b86e774697a" exitCode=0 Dec 01 09:57:38 crc kubenswrapper[4933]: I1201 09:57:38.230360 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xccnw" event={"ID":"1f8eb42b-49ff-43b7-9d59-597ff62f0428","Type":"ContainerDied","Data":"f1ce8899c61722a917b73cf9acfbb142c2e4bf9a1e90692e751b6b86e774697a"} Dec 01 09:57:38 crc kubenswrapper[4933]: I1201 09:57:38.533906 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xccnw" Dec 01 09:57:38 crc kubenswrapper[4933]: I1201 09:57:38.697117 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f8eb42b-49ff-43b7-9d59-597ff62f0428-utilities\") pod \"1f8eb42b-49ff-43b7-9d59-597ff62f0428\" (UID: \"1f8eb42b-49ff-43b7-9d59-597ff62f0428\") " Dec 01 09:57:38 crc kubenswrapper[4933]: I1201 09:57:38.697280 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f8eb42b-49ff-43b7-9d59-597ff62f0428-catalog-content\") pod \"1f8eb42b-49ff-43b7-9d59-597ff62f0428\" (UID: \"1f8eb42b-49ff-43b7-9d59-597ff62f0428\") " Dec 01 09:57:38 crc kubenswrapper[4933]: I1201 09:57:38.697363 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2szxl\" (UniqueName: \"kubernetes.io/projected/1f8eb42b-49ff-43b7-9d59-597ff62f0428-kube-api-access-2szxl\") pod \"1f8eb42b-49ff-43b7-9d59-597ff62f0428\" (UID: \"1f8eb42b-49ff-43b7-9d59-597ff62f0428\") " Dec 01 09:57:38 crc kubenswrapper[4933]: I1201 09:57:38.697921 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f8eb42b-49ff-43b7-9d59-597ff62f0428-utilities" (OuterVolumeSpecName: "utilities") pod "1f8eb42b-49ff-43b7-9d59-597ff62f0428" (UID: "1f8eb42b-49ff-43b7-9d59-597ff62f0428"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:57:38 crc kubenswrapper[4933]: I1201 09:57:38.699055 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f8eb42b-49ff-43b7-9d59-597ff62f0428-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 09:57:38 crc kubenswrapper[4933]: I1201 09:57:38.726756 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f8eb42b-49ff-43b7-9d59-597ff62f0428-kube-api-access-2szxl" (OuterVolumeSpecName: "kube-api-access-2szxl") pod "1f8eb42b-49ff-43b7-9d59-597ff62f0428" (UID: "1f8eb42b-49ff-43b7-9d59-597ff62f0428"). InnerVolumeSpecName "kube-api-access-2szxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:57:38 crc kubenswrapper[4933]: I1201 09:57:38.735567 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f8eb42b-49ff-43b7-9d59-597ff62f0428-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f8eb42b-49ff-43b7-9d59-597ff62f0428" (UID: "1f8eb42b-49ff-43b7-9d59-597ff62f0428"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:57:38 crc kubenswrapper[4933]: I1201 09:57:38.801648 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f8eb42b-49ff-43b7-9d59-597ff62f0428-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 09:57:38 crc kubenswrapper[4933]: I1201 09:57:38.801708 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2szxl\" (UniqueName: \"kubernetes.io/projected/1f8eb42b-49ff-43b7-9d59-597ff62f0428-kube-api-access-2szxl\") on node \"crc\" DevicePath \"\"" Dec 01 09:57:39 crc kubenswrapper[4933]: I1201 09:57:39.262966 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xccnw" event={"ID":"1f8eb42b-49ff-43b7-9d59-597ff62f0428","Type":"ContainerDied","Data":"2b852e1dd3620316d080984cbb9bc355808a0509ae1537750f3e9c8ffad8b062"} Dec 01 09:57:39 crc kubenswrapper[4933]: I1201 09:57:39.263069 4933 scope.go:117] "RemoveContainer" containerID="f1ce8899c61722a917b73cf9acfbb142c2e4bf9a1e90692e751b6b86e774697a" Dec 01 09:57:39 crc kubenswrapper[4933]: I1201 09:57:39.263368 4933 util.go:48] "No ready sandbox for pod can be found. 
Dec 01 09:57:39 crc kubenswrapper[4933]: I1201 09:57:39.298097 4933 scope.go:117] "RemoveContainer" containerID="c461b021133c75f50467e7db62ea156051f4422972c446ba685ff12715e353b9"
Dec 01 09:57:39 crc kubenswrapper[4933]: I1201 09:57:39.310518 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xccnw"]
Dec 01 09:57:39 crc kubenswrapper[4933]: I1201 09:57:39.319578 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xccnw"]
Dec 01 09:57:39 crc kubenswrapper[4933]: I1201 09:57:39.323369 4933 scope.go:117] "RemoveContainer" containerID="2848f7f31066f29b62657557a442e610931423733764bb5ac4687ca31d7247c7"
Dec 01 09:57:39 crc kubenswrapper[4933]: I1201 09:57:39.683650 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f8eb42b-49ff-43b7-9d59-597ff62f0428" path="/var/lib/kubelet/pods/1f8eb42b-49ff-43b7-9d59-597ff62f0428/volumes"
Dec 01 09:57:41 crc kubenswrapper[4933]: I1201 09:57:41.966794 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-knm8k"
Dec 01 09:57:41 crc kubenswrapper[4933]: I1201 09:57:41.967137 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-knm8k"
Dec 01 09:57:42 crc kubenswrapper[4933]: I1201 09:57:42.014022 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-knm8k"
Dec 01 09:57:42 crc kubenswrapper[4933]: I1201 09:57:42.343768 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-knm8k"
Dec 01 09:57:43 crc kubenswrapper[4933]: I1201 09:57:43.198802 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-knm8k"]
Dec 01 09:57:44 crc kubenswrapper[4933]: I1201 09:57:44.316994 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-knm8k" podUID="d112a80b-a3db-4671-98a3-21714cf53b98" containerName="registry-server" containerID="cri-o://8a107df15141f0bc10ee41d451ccc2af06a5f91de3b7874d94b05216e83ebc3a" gracePeriod=2
Dec 01 09:57:44 crc kubenswrapper[4933]: I1201 09:57:44.857198 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-knm8k"
Dec 01 09:57:45 crc kubenswrapper[4933]: I1201 09:57:45.042014 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d112a80b-a3db-4671-98a3-21714cf53b98-catalog-content\") pod \"d112a80b-a3db-4671-98a3-21714cf53b98\" (UID: \"d112a80b-a3db-4671-98a3-21714cf53b98\") "
Dec 01 09:57:45 crc kubenswrapper[4933]: I1201 09:57:45.042073 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d112a80b-a3db-4671-98a3-21714cf53b98-utilities\") pod \"d112a80b-a3db-4671-98a3-21714cf53b98\" (UID: \"d112a80b-a3db-4671-98a3-21714cf53b98\") "
Dec 01 09:57:45 crc kubenswrapper[4933]: I1201 09:57:45.042187 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d997g\" (UniqueName: \"kubernetes.io/projected/d112a80b-a3db-4671-98a3-21714cf53b98-kube-api-access-d997g\") pod \"d112a80b-a3db-4671-98a3-21714cf53b98\" (UID: \"d112a80b-a3db-4671-98a3-21714cf53b98\") "
Dec 01 09:57:45 crc kubenswrapper[4933]: I1201 09:57:45.043626 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d112a80b-a3db-4671-98a3-21714cf53b98-utilities" (OuterVolumeSpecName: "utilities") pod "d112a80b-a3db-4671-98a3-21714cf53b98" (UID: "d112a80b-a3db-4671-98a3-21714cf53b98"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:57:45 crc kubenswrapper[4933]: I1201 09:57:45.048606 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d112a80b-a3db-4671-98a3-21714cf53b98-kube-api-access-d997g" (OuterVolumeSpecName: "kube-api-access-d997g") pod "d112a80b-a3db-4671-98a3-21714cf53b98" (UID: "d112a80b-a3db-4671-98a3-21714cf53b98"). InnerVolumeSpecName "kube-api-access-d997g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:57:45 crc kubenswrapper[4933]: I1201 09:57:45.101564 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d112a80b-a3db-4671-98a3-21714cf53b98-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d112a80b-a3db-4671-98a3-21714cf53b98" (UID: "d112a80b-a3db-4671-98a3-21714cf53b98"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:57:45 crc kubenswrapper[4933]: I1201 09:57:45.144003 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d112a80b-a3db-4671-98a3-21714cf53b98-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 09:57:45 crc kubenswrapper[4933]: I1201 09:57:45.144349 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d112a80b-a3db-4671-98a3-21714cf53b98-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 09:57:45 crc kubenswrapper[4933]: I1201 09:57:45.144436 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d997g\" (UniqueName: \"kubernetes.io/projected/d112a80b-a3db-4671-98a3-21714cf53b98-kube-api-access-d997g\") on node \"crc\" DevicePath \"\"" Dec 01 09:57:45 crc kubenswrapper[4933]: I1201 09:57:45.327731 4933 generic.go:334] "Generic (PLEG): container finished" podID="d112a80b-a3db-4671-98a3-21714cf53b98" containerID="8a107df15141f0bc10ee41d451ccc2af06a5f91de3b7874d94b05216e83ebc3a" exitCode=0 Dec 01 09:57:45 crc kubenswrapper[4933]: I1201 09:57:45.327813 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-knm8k" Dec 01 09:57:45 crc kubenswrapper[4933]: I1201 09:57:45.327824 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knm8k" event={"ID":"d112a80b-a3db-4671-98a3-21714cf53b98","Type":"ContainerDied","Data":"8a107df15141f0bc10ee41d451ccc2af06a5f91de3b7874d94b05216e83ebc3a"} Dec 01 09:57:45 crc kubenswrapper[4933]: I1201 09:57:45.328198 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knm8k" event={"ID":"d112a80b-a3db-4671-98a3-21714cf53b98","Type":"ContainerDied","Data":"ce831521344fc33cf41acd81ab64ec648a9ef5bf0593118704dd3d00cd695ba3"} Dec 01 09:57:45 crc kubenswrapper[4933]: I1201 09:57:45.328222 4933 scope.go:117] "RemoveContainer" containerID="8a107df15141f0bc10ee41d451ccc2af06a5f91de3b7874d94b05216e83ebc3a" Dec 01 09:57:45 crc kubenswrapper[4933]: I1201 09:57:45.359511 4933 scope.go:117] "RemoveContainer" containerID="a80f0ab921dc9b80f160c36ea7b3c2d4e03d81640a94655cffe7f43ca86f4675" Dec 01 09:57:45 crc kubenswrapper[4933]: I1201 09:57:45.366156 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-knm8k"] Dec 01 09:57:45 crc kubenswrapper[4933]: I1201 09:57:45.379117 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-knm8k"] Dec 01 09:57:45 crc kubenswrapper[4933]: I1201 09:57:45.384192 4933 scope.go:117] "RemoveContainer" containerID="66111c4ec1c92a72ae471af14941dfa50afd528c4b10e5f370cd4378394318a3" Dec 01 09:57:45 crc kubenswrapper[4933]: I1201 09:57:45.449235 4933 scope.go:117] "RemoveContainer" containerID="8a107df15141f0bc10ee41d451ccc2af06a5f91de3b7874d94b05216e83ebc3a" Dec 01 09:57:45 crc kubenswrapper[4933]: E1201 09:57:45.449832 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a107df15141f0bc10ee41d451ccc2af06a5f91de3b7874d94b05216e83ebc3a\": container with ID starting with 8a107df15141f0bc10ee41d451ccc2af06a5f91de3b7874d94b05216e83ebc3a not found: ID does not exist" containerID="8a107df15141f0bc10ee41d451ccc2af06a5f91de3b7874d94b05216e83ebc3a" Dec 01 09:57:45 crc kubenswrapper[4933]: I1201 09:57:45.449886 
Dec 01 09:57:45 crc kubenswrapper[4933]: I1201 09:57:45.449916 4933 scope.go:117] "RemoveContainer" containerID="a80f0ab921dc9b80f160c36ea7b3c2d4e03d81640a94655cffe7f43ca86f4675"
Dec 01 09:57:45 crc kubenswrapper[4933]: E1201 09:57:45.450233 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a80f0ab921dc9b80f160c36ea7b3c2d4e03d81640a94655cffe7f43ca86f4675\": container with ID starting with a80f0ab921dc9b80f160c36ea7b3c2d4e03d81640a94655cffe7f43ca86f4675 not found: ID does not exist" containerID="a80f0ab921dc9b80f160c36ea7b3c2d4e03d81640a94655cffe7f43ca86f4675"
Dec 01 09:57:45 crc kubenswrapper[4933]: I1201 09:57:45.450268 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a80f0ab921dc9b80f160c36ea7b3c2d4e03d81640a94655cffe7f43ca86f4675"} err="failed to get container status \"a80f0ab921dc9b80f160c36ea7b3c2d4e03d81640a94655cffe7f43ca86f4675\": rpc error: code = NotFound desc = could not find container \"a80f0ab921dc9b80f160c36ea7b3c2d4e03d81640a94655cffe7f43ca86f4675\": container with ID starting with a80f0ab921dc9b80f160c36ea7b3c2d4e03d81640a94655cffe7f43ca86f4675 not found: ID does not exist"
Dec 01 09:57:45 crc kubenswrapper[4933]: I1201 09:57:45.450291 4933 scope.go:117] "RemoveContainer" containerID="66111c4ec1c92a72ae471af14941dfa50afd528c4b10e5f370cd4378394318a3"
Dec 01 09:57:45 crc kubenswrapper[4933]: E1201 09:57:45.450569 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66111c4ec1c92a72ae471af14941dfa50afd528c4b10e5f370cd4378394318a3\": container with ID starting with 66111c4ec1c92a72ae471af14941dfa50afd528c4b10e5f370cd4378394318a3 not found: ID does not exist" containerID="66111c4ec1c92a72ae471af14941dfa50afd528c4b10e5f370cd4378394318a3"
Dec 01 09:57:45 crc kubenswrapper[4933]: I1201 09:57:45.450596 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66111c4ec1c92a72ae471af14941dfa50afd528c4b10e5f370cd4378394318a3"} err="failed to get container status \"66111c4ec1c92a72ae471af14941dfa50afd528c4b10e5f370cd4378394318a3\": rpc error: code = NotFound desc = could not find container \"66111c4ec1c92a72ae471af14941dfa50afd528c4b10e5f370cd4378394318a3\": container with ID starting with 66111c4ec1c92a72ae471af14941dfa50afd528c4b10e5f370cd4378394318a3 not found: ID does not exist"
Dec 01 09:57:45 crc kubenswrapper[4933]: I1201 09:57:45.678395 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d112a80b-a3db-4671-98a3-21714cf53b98" path="/var/lib/kubelet/pods/d112a80b-a3db-4671-98a3-21714cf53b98/volumes"
Dec 01 09:57:52 crc kubenswrapper[4933]: I1201 09:57:52.392011 4933 scope.go:117] "RemoveContainer" containerID="ec39fafe40cb1124fe562c39af12965099e3767a85c6ad02a63597c859ff1df6"
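The "ContainerStatus from runtime service failed" / "DeleteContainer returned error" pairs above are benign: the containers were already gone by the time the deletor re-queried them, and a NotFound from the runtime means there is nothing left to remove. A Go sketch of that idempotent-delete pattern under those assumptions; runtimeRemove is a hypothetical stand-in for the real CRI call, and the example needs the google.golang.org/grpc module:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer treats NotFound from the runtime as "already removed",
// mirroring the benign errors logged above. runtimeRemove is hypothetical.
func removeContainer(id string, runtimeRemove func(string) error) error {
	if err := runtimeRemove(id); err != nil {
		if status.Code(err) == codes.NotFound {
			return nil // already gone; deletion is idempotent
		}
		return err
	}
	return nil
}

func main() {
	// Fake runtime that always reports the container as missing.
	gone := func(id string) error {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	fmt.Println(removeContainer("8a107df15141", gone)) // prints <nil>
}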
Dec 01 09:58:41 crc kubenswrapper[4933]: I1201 09:58:41.741451 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 09:58:41 crc kubenswrapper[4933]: I1201 09:58:41.742159 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 09:58:52 crc kubenswrapper[4933]: I1201 09:58:52.513024 4933 scope.go:117] "RemoveContainer" containerID="99f40a6ee75597adad7b98717363d9b62e3ecd901d04bedd1b2d0feb02bb0b77"
Dec 01 09:59:11 crc kubenswrapper[4933]: I1201 09:59:11.740872 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 09:59:11 crc kubenswrapper[4933]: I1201 09:59:11.741530 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 09:59:41 crc kubenswrapper[4933]: I1201 09:59:41.741371 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 09:59:41 crc kubenswrapper[4933]: I1201 09:59:41.741984 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 09:59:41 crc kubenswrapper[4933]: I1201 09:59:41.742035 4933 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd"
Dec 01 09:59:41 crc kubenswrapper[4933]: I1201 09:59:41.742568 4933 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417"} pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 01 09:59:41 crc kubenswrapper[4933]: I1201 09:59:41.742626 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" containerID="cri-o://c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" gracePeriod=600
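Behind the prober records above is a plain HTTP GET; "connection refused" simply means nothing is listening on port 8798 while the daemon container is down. A rough Go equivalent of the check (HTTP probes count 2xx/3xx responses as success; the one-second timeout here is illustrative, not the pod's configured value):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: time.Second}
	// URL copied from the probe output above.
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// e.g. dial tcp 127.0.0.1:8798: connect: connection refused
		fmt.Println("probe failure:", err)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		fmt.Println("probe success:", resp.Status)
	} else {
		fmt.Println("probe failure:", resp.Status)
	}
}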
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 09:59:42 crc kubenswrapper[4933]: I1201 09:59:42.612464 4933 generic.go:334] "Generic (PLEG): container finished" podID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" exitCode=0 Dec 01 09:59:42 crc kubenswrapper[4933]: I1201 09:59:42.612527 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerDied","Data":"c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417"} Dec 01 09:59:42 crc kubenswrapper[4933]: I1201 09:59:42.612912 4933 scope.go:117] "RemoveContainer" containerID="8505b08ab0c6a32f9d9b3cdadd9a40ce10f6aaa716925824a170840b097c0cb7" Dec 01 09:59:42 crc kubenswrapper[4933]: I1201 09:59:42.613749 4933 scope.go:117] "RemoveContainer" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" Dec 01 09:59:42 crc kubenswrapper[4933]: E1201 09:59:42.615001 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 09:59:55 crc kubenswrapper[4933]: I1201 09:59:55.668100 4933 scope.go:117] "RemoveContainer" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" Dec 01 09:59:55 crc kubenswrapper[4933]: E1201 09:59:55.668969 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.151516 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409720-6wbwj"] Dec 01 10:00:00 crc kubenswrapper[4933]: E1201 10:00:00.155272 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8" containerName="extract-utilities" Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.155329 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8" containerName="extract-utilities" Dec 01 10:00:00 crc kubenswrapper[4933]: E1201 10:00:00.155351 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8" containerName="extract-content" Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.155361 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8" containerName="extract-content" Dec 01 10:00:00 crc kubenswrapper[4933]: E1201 10:00:00.155374 4933 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d112a80b-a3db-4671-98a3-21714cf53b98" containerName="extract-utilities" Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.155382 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d112a80b-a3db-4671-98a3-21714cf53b98" containerName="extract-utilities" Dec 01 10:00:00 crc kubenswrapper[4933]: E1201 10:00:00.155394 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d112a80b-a3db-4671-98a3-21714cf53b98" containerName="extract-content" Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.155401 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d112a80b-a3db-4671-98a3-21714cf53b98" containerName="extract-content" Dec 01 10:00:00 crc kubenswrapper[4933]: E1201 10:00:00.155427 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8eb42b-49ff-43b7-9d59-597ff62f0428" containerName="extract-content" Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.155435 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8eb42b-49ff-43b7-9d59-597ff62f0428" containerName="extract-content" Dec 01 10:00:00 crc kubenswrapper[4933]: E1201 10:00:00.155447 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8eb42b-49ff-43b7-9d59-597ff62f0428" containerName="registry-server" Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.155454 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8eb42b-49ff-43b7-9d59-597ff62f0428" containerName="registry-server" Dec 01 10:00:00 crc kubenswrapper[4933]: E1201 10:00:00.155473 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8eb42b-49ff-43b7-9d59-597ff62f0428" containerName="extract-utilities" Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.155480 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8eb42b-49ff-43b7-9d59-597ff62f0428" containerName="extract-utilities" Dec 01 10:00:00 crc kubenswrapper[4933]: E1201 10:00:00.155505 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8" containerName="registry-server" Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.155513 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8" containerName="registry-server" Dec 01 10:00:00 crc kubenswrapper[4933]: E1201 10:00:00.155527 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d112a80b-a3db-4671-98a3-21714cf53b98" containerName="registry-server" Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.155535 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d112a80b-a3db-4671-98a3-21714cf53b98" containerName="registry-server" Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.155854 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="d112a80b-a3db-4671-98a3-21714cf53b98" containerName="registry-server" Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.155872 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca28354b-43fe-4a44-8b10-b9e6dfe4dcf8" containerName="registry-server" Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.155889 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8eb42b-49ff-43b7-9d59-597ff62f0428" containerName="registry-server" Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.156765 4933 util.go:30] "No sandbox for pod can be found. 
Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.162370 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.163582 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.180140 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409720-6wbwj"]
Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.285588 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4f18f49-2a43-429f-8406-5177afaeacfd-config-volume\") pod \"collect-profiles-29409720-6wbwj\" (UID: \"a4f18f49-2a43-429f-8406-5177afaeacfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409720-6wbwj"
Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.285682 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4f18f49-2a43-429f-8406-5177afaeacfd-secret-volume\") pod \"collect-profiles-29409720-6wbwj\" (UID: \"a4f18f49-2a43-429f-8406-5177afaeacfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409720-6wbwj"
Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.285817 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6zbz\" (UniqueName: \"kubernetes.io/projected/a4f18f49-2a43-429f-8406-5177afaeacfd-kube-api-access-m6zbz\") pod \"collect-profiles-29409720-6wbwj\" (UID: \"a4f18f49-2a43-429f-8406-5177afaeacfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409720-6wbwj"
Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.388451 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4f18f49-2a43-429f-8406-5177afaeacfd-config-volume\") pod \"collect-profiles-29409720-6wbwj\" (UID: \"a4f18f49-2a43-429f-8406-5177afaeacfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409720-6wbwj"
Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.388574 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4f18f49-2a43-429f-8406-5177afaeacfd-secret-volume\") pod \"collect-profiles-29409720-6wbwj\" (UID: \"a4f18f49-2a43-429f-8406-5177afaeacfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409720-6wbwj"
Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.388698 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6zbz\" (UniqueName: \"kubernetes.io/projected/a4f18f49-2a43-429f-8406-5177afaeacfd-kube-api-access-m6zbz\") pod \"collect-profiles-29409720-6wbwj\" (UID: \"a4f18f49-2a43-429f-8406-5177afaeacfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409720-6wbwj"
Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.389435 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4f18f49-2a43-429f-8406-5177afaeacfd-config-volume\") pod \"collect-profiles-29409720-6wbwj\" (UID: \"a4f18f49-2a43-429f-8406-5177afaeacfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409720-6wbwj"
\"collect-profiles-29409720-6wbwj\" (UID: \"a4f18f49-2a43-429f-8406-5177afaeacfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409720-6wbwj" Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.395664 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4f18f49-2a43-429f-8406-5177afaeacfd-secret-volume\") pod \"collect-profiles-29409720-6wbwj\" (UID: \"a4f18f49-2a43-429f-8406-5177afaeacfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409720-6wbwj" Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.411561 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6zbz\" (UniqueName: \"kubernetes.io/projected/a4f18f49-2a43-429f-8406-5177afaeacfd-kube-api-access-m6zbz\") pod \"collect-profiles-29409720-6wbwj\" (UID: \"a4f18f49-2a43-429f-8406-5177afaeacfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409720-6wbwj" Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.488606 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409720-6wbwj" Dec 01 10:00:00 crc kubenswrapper[4933]: I1201 10:00:00.971691 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409720-6wbwj"] Dec 01 10:00:01 crc kubenswrapper[4933]: I1201 10:00:01.825194 4933 generic.go:334] "Generic (PLEG): container finished" podID="a4f18f49-2a43-429f-8406-5177afaeacfd" containerID="d5603104d3d0719fd7d505e105343c9f19686a898bebe128d02b4f19c5e5f639" exitCode=0 Dec 01 10:00:01 crc kubenswrapper[4933]: I1201 10:00:01.825297 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409720-6wbwj" event={"ID":"a4f18f49-2a43-429f-8406-5177afaeacfd","Type":"ContainerDied","Data":"d5603104d3d0719fd7d505e105343c9f19686a898bebe128d02b4f19c5e5f639"} Dec 01 10:00:01 crc kubenswrapper[4933]: I1201 10:00:01.825784 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409720-6wbwj" event={"ID":"a4f18f49-2a43-429f-8406-5177afaeacfd","Type":"ContainerStarted","Data":"71d91d4d05b5465e11e7a55fec41881467e6ce13fe18844045a03444600e401f"} Dec 01 10:00:03 crc kubenswrapper[4933]: I1201 10:00:03.211777 4933 util.go:48] "No ready sandbox for pod can be found. 
Dec 01 10:00:03 crc kubenswrapper[4933]: I1201 10:00:03.361795 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4f18f49-2a43-429f-8406-5177afaeacfd-secret-volume\") pod \"a4f18f49-2a43-429f-8406-5177afaeacfd\" (UID: \"a4f18f49-2a43-429f-8406-5177afaeacfd\") "
Dec 01 10:00:03 crc kubenswrapper[4933]: I1201 10:00:03.361931 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6zbz\" (UniqueName: \"kubernetes.io/projected/a4f18f49-2a43-429f-8406-5177afaeacfd-kube-api-access-m6zbz\") pod \"a4f18f49-2a43-429f-8406-5177afaeacfd\" (UID: \"a4f18f49-2a43-429f-8406-5177afaeacfd\") "
Dec 01 10:00:03 crc kubenswrapper[4933]: I1201 10:00:03.362043 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4f18f49-2a43-429f-8406-5177afaeacfd-config-volume\") pod \"a4f18f49-2a43-429f-8406-5177afaeacfd\" (UID: \"a4f18f49-2a43-429f-8406-5177afaeacfd\") "
Dec 01 10:00:03 crc kubenswrapper[4933]: I1201 10:00:03.363482 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4f18f49-2a43-429f-8406-5177afaeacfd-config-volume" (OuterVolumeSpecName: "config-volume") pod "a4f18f49-2a43-429f-8406-5177afaeacfd" (UID: "a4f18f49-2a43-429f-8406-5177afaeacfd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 10:00:03 crc kubenswrapper[4933]: I1201 10:00:03.370705 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f18f49-2a43-429f-8406-5177afaeacfd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a4f18f49-2a43-429f-8406-5177afaeacfd" (UID: "a4f18f49-2a43-429f-8406-5177afaeacfd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 10:00:03 crc kubenswrapper[4933]: I1201 10:00:03.371713 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4f18f49-2a43-429f-8406-5177afaeacfd-kube-api-access-m6zbz" (OuterVolumeSpecName: "kube-api-access-m6zbz") pod "a4f18f49-2a43-429f-8406-5177afaeacfd" (UID: "a4f18f49-2a43-429f-8406-5177afaeacfd"). InnerVolumeSpecName "kube-api-access-m6zbz". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:00:03 crc kubenswrapper[4933]: I1201 10:00:03.465784 4933 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4f18f49-2a43-429f-8406-5177afaeacfd-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 10:00:03 crc kubenswrapper[4933]: I1201 10:00:03.465840 4933 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4f18f49-2a43-429f-8406-5177afaeacfd-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 10:00:03 crc kubenswrapper[4933]: I1201 10:00:03.465856 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6zbz\" (UniqueName: \"kubernetes.io/projected/a4f18f49-2a43-429f-8406-5177afaeacfd-kube-api-access-m6zbz\") on node \"crc\" DevicePath \"\"" Dec 01 10:00:03 crc kubenswrapper[4933]: I1201 10:00:03.850748 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409720-6wbwj" event={"ID":"a4f18f49-2a43-429f-8406-5177afaeacfd","Type":"ContainerDied","Data":"71d91d4d05b5465e11e7a55fec41881467e6ce13fe18844045a03444600e401f"} Dec 01 10:00:03 crc kubenswrapper[4933]: I1201 10:00:03.850801 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71d91d4d05b5465e11e7a55fec41881467e6ce13fe18844045a03444600e401f" Dec 01 10:00:03 crc kubenswrapper[4933]: I1201 10:00:03.851351 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409720-6wbwj" Dec 01 10:00:08 crc kubenswrapper[4933]: I1201 10:00:08.668337 4933 scope.go:117] "RemoveContainer" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" Dec 01 10:00:08 crc kubenswrapper[4933]: E1201 10:00:08.669228 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:00:13 crc kubenswrapper[4933]: I1201 10:00:13.944912 4933 generic.go:334] "Generic (PLEG): container finished" podID="32dfd9a4-8242-4931-a791-de1fc8b1d4a9" containerID="4dcb7d2f9bb8bbcb0473980c8d293b7d96f1cb496d49e7d5e4fba9d0eff6d18c" exitCode=0 Dec 01 10:00:13 crc kubenswrapper[4933]: I1201 10:00:13.945021 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd" event={"ID":"32dfd9a4-8242-4931-a791-de1fc8b1d4a9","Type":"ContainerDied","Data":"4dcb7d2f9bb8bbcb0473980c8d293b7d96f1cb496d49e7d5e4fba9d0eff6d18c"} Dec 01 10:00:15 crc kubenswrapper[4933]: I1201 10:00:15.055847 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6cf5-account-create-update-shwg2"] Dec 01 10:00:15 crc kubenswrapper[4933]: I1201 10:00:15.066403 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-0fd3-account-create-update-cjld8"] Dec 01 10:00:15 crc kubenswrapper[4933]: I1201 10:00:15.088654 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6cf5-account-create-update-shwg2"] Dec 01 10:00:15 crc kubenswrapper[4933]: I1201 10:00:15.100465 4933 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/placement-0fd3-account-create-update-cjld8"] Dec 01 10:00:15 crc kubenswrapper[4933]: I1201 10:00:15.425630 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd" Dec 01 10:00:15 crc kubenswrapper[4933]: I1201 10:00:15.519869 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdvxz\" (UniqueName: \"kubernetes.io/projected/32dfd9a4-8242-4931-a791-de1fc8b1d4a9-kube-api-access-rdvxz\") pod \"32dfd9a4-8242-4931-a791-de1fc8b1d4a9\" (UID: \"32dfd9a4-8242-4931-a791-de1fc8b1d4a9\") " Dec 01 10:00:15 crc kubenswrapper[4933]: I1201 10:00:15.519962 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32dfd9a4-8242-4931-a791-de1fc8b1d4a9-bootstrap-combined-ca-bundle\") pod \"32dfd9a4-8242-4931-a791-de1fc8b1d4a9\" (UID: \"32dfd9a4-8242-4931-a791-de1fc8b1d4a9\") " Dec 01 10:00:15 crc kubenswrapper[4933]: I1201 10:00:15.519998 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32dfd9a4-8242-4931-a791-de1fc8b1d4a9-inventory\") pod \"32dfd9a4-8242-4931-a791-de1fc8b1d4a9\" (UID: \"32dfd9a4-8242-4931-a791-de1fc8b1d4a9\") " Dec 01 10:00:15 crc kubenswrapper[4933]: I1201 10:00:15.520349 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32dfd9a4-8242-4931-a791-de1fc8b1d4a9-ssh-key\") pod \"32dfd9a4-8242-4931-a791-de1fc8b1d4a9\" (UID: \"32dfd9a4-8242-4931-a791-de1fc8b1d4a9\") " Dec 01 10:00:15 crc kubenswrapper[4933]: I1201 10:00:15.528797 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32dfd9a4-8242-4931-a791-de1fc8b1d4a9-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "32dfd9a4-8242-4931-a791-de1fc8b1d4a9" (UID: "32dfd9a4-8242-4931-a791-de1fc8b1d4a9"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:00:15 crc kubenswrapper[4933]: I1201 10:00:15.528882 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32dfd9a4-8242-4931-a791-de1fc8b1d4a9-kube-api-access-rdvxz" (OuterVolumeSpecName: "kube-api-access-rdvxz") pod "32dfd9a4-8242-4931-a791-de1fc8b1d4a9" (UID: "32dfd9a4-8242-4931-a791-de1fc8b1d4a9"). InnerVolumeSpecName "kube-api-access-rdvxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:00:15 crc kubenswrapper[4933]: I1201 10:00:15.556558 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32dfd9a4-8242-4931-a791-de1fc8b1d4a9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "32dfd9a4-8242-4931-a791-de1fc8b1d4a9" (UID: "32dfd9a4-8242-4931-a791-de1fc8b1d4a9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:00:15 crc kubenswrapper[4933]: I1201 10:00:15.567198 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32dfd9a4-8242-4931-a791-de1fc8b1d4a9-inventory" (OuterVolumeSpecName: "inventory") pod "32dfd9a4-8242-4931-a791-de1fc8b1d4a9" (UID: "32dfd9a4-8242-4931-a791-de1fc8b1d4a9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:00:15 crc kubenswrapper[4933]: I1201 10:00:15.622631 4933 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32dfd9a4-8242-4931-a791-de1fc8b1d4a9-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 10:00:15 crc kubenswrapper[4933]: I1201 10:00:15.622676 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdvxz\" (UniqueName: \"kubernetes.io/projected/32dfd9a4-8242-4931-a791-de1fc8b1d4a9-kube-api-access-rdvxz\") on node \"crc\" DevicePath \"\"" Dec 01 10:00:15 crc kubenswrapper[4933]: I1201 10:00:15.622687 4933 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32dfd9a4-8242-4931-a791-de1fc8b1d4a9-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 10:00:15 crc kubenswrapper[4933]: I1201 10:00:15.622727 4933 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32dfd9a4-8242-4931-a791-de1fc8b1d4a9-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 10:00:15 crc kubenswrapper[4933]: I1201 10:00:15.679458 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49de438f-c2ca-4d52-a9ca-47fb8ef7ec81" path="/var/lib/kubelet/pods/49de438f-c2ca-4d52-a9ca-47fb8ef7ec81/volumes" Dec 01 10:00:15 crc kubenswrapper[4933]: I1201 10:00:15.680168 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346" path="/var/lib/kubelet/pods/5ca0cee0-20d1-4fa4-9ea7-c7c84f0a4346/volumes" Dec 01 10:00:15 crc kubenswrapper[4933]: I1201 10:00:15.966779 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd" event={"ID":"32dfd9a4-8242-4931-a791-de1fc8b1d4a9","Type":"ContainerDied","Data":"588c022a2c0edecb4eb91d034fdc4c22650b1143f57ce6810388d78703ab9aef"} Dec 01 10:00:15 crc kubenswrapper[4933]: I1201 10:00:15.967524 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd" Dec 01 10:00:15 crc kubenswrapper[4933]: I1201 10:00:15.967620 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="588c022a2c0edecb4eb91d034fdc4c22650b1143f57ce6810388d78703ab9aef" Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.046971 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-5xz5w"] Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.059011 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-25mh8"] Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.071496 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7ef6-account-create-update-g9stz"] Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.086766 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-25mh8"] Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.106483 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-7ef6-account-create-update-g9stz"] Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.132141 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-5xz5w"] Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.147414 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m52lh"] Dec 01 10:00:16 crc kubenswrapper[4933]: E1201 10:00:16.148029 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32dfd9a4-8242-4931-a791-de1fc8b1d4a9" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.148050 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="32dfd9a4-8242-4931-a791-de1fc8b1d4a9" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 01 10:00:16 crc kubenswrapper[4933]: E1201 10:00:16.148067 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4f18f49-2a43-429f-8406-5177afaeacfd" containerName="collect-profiles" Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.148074 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4f18f49-2a43-429f-8406-5177afaeacfd" containerName="collect-profiles" Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.148280 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4f18f49-2a43-429f-8406-5177afaeacfd" containerName="collect-profiles" Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.148320 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="32dfd9a4-8242-4931-a791-de1fc8b1d4a9" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.149052 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m52lh" Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.151711 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.152349 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.153892 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.155692 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8vmpq" Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.159626 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m52lh"] Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.338823 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85720139-3c78-4370-98fc-31899c778fd9-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-m52lh\" (UID: \"85720139-3c78-4370-98fc-31899c778fd9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m52lh" Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.338933 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85720139-3c78-4370-98fc-31899c778fd9-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-m52lh\" (UID: \"85720139-3c78-4370-98fc-31899c778fd9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m52lh" Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.338980 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4rmm\" (UniqueName: \"kubernetes.io/projected/85720139-3c78-4370-98fc-31899c778fd9-kube-api-access-m4rmm\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-m52lh\" (UID: \"85720139-3c78-4370-98fc-31899c778fd9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m52lh" Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.441527 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85720139-3c78-4370-98fc-31899c778fd9-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-m52lh\" (UID: \"85720139-3c78-4370-98fc-31899c778fd9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m52lh" Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.441972 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85720139-3c78-4370-98fc-31899c778fd9-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-m52lh\" (UID: \"85720139-3c78-4370-98fc-31899c778fd9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m52lh" Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.442103 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4rmm\" (UniqueName: \"kubernetes.io/projected/85720139-3c78-4370-98fc-31899c778fd9-kube-api-access-m4rmm\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-m52lh\" (UID: \"85720139-3c78-4370-98fc-31899c778fd9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m52lh" Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.448956 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85720139-3c78-4370-98fc-31899c778fd9-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-m52lh\" (UID: \"85720139-3c78-4370-98fc-31899c778fd9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m52lh" Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.449003 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85720139-3c78-4370-98fc-31899c778fd9-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-m52lh\" (UID: \"85720139-3c78-4370-98fc-31899c778fd9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m52lh" Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.461850 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4rmm\" (UniqueName: \"kubernetes.io/projected/85720139-3c78-4370-98fc-31899c778fd9-kube-api-access-m4rmm\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-m52lh\" (UID: \"85720139-3c78-4370-98fc-31899c778fd9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m52lh" Dec 01 10:00:16 crc kubenswrapper[4933]: I1201 10:00:16.470084 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m52lh" Dec 01 10:00:17 crc kubenswrapper[4933]: I1201 10:00:17.032162 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m52lh"] Dec 01 10:00:17 crc kubenswrapper[4933]: I1201 10:00:17.040119 4933 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 10:00:17 crc kubenswrapper[4933]: I1201 10:00:17.679753 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="031657e9-4699-435e-a8e2-0e2442a10dd0" path="/var/lib/kubelet/pods/031657e9-4699-435e-a8e2-0e2442a10dd0/volumes" Dec 01 10:00:17 crc kubenswrapper[4933]: I1201 10:00:17.681841 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61e85825-be78-4eba-9c52-3649968d0390" path="/var/lib/kubelet/pods/61e85825-be78-4eba-9c52-3649968d0390/volumes" Dec 01 10:00:17 crc kubenswrapper[4933]: I1201 10:00:17.682660 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81eeda37-3cd9-4518-8561-9b414ec377e7" path="/var/lib/kubelet/pods/81eeda37-3cd9-4518-8561-9b414ec377e7/volumes" Dec 01 10:00:17 crc kubenswrapper[4933]: I1201 10:00:17.990052 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m52lh" event={"ID":"85720139-3c78-4370-98fc-31899c778fd9","Type":"ContainerStarted","Data":"1b5beefe2d8dc01b430a7c353a6abd0311295a1198cbc7119f727bb97479dc3a"} Dec 01 10:00:17 crc kubenswrapper[4933]: I1201 10:00:17.990643 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m52lh" event={"ID":"85720139-3c78-4370-98fc-31899c778fd9","Type":"ContainerStarted","Data":"dec5f03d9a6e15e5a58491802dec2b5c9490c3a263dad73c3418d4557a78750f"} Dec 01 10:00:18 crc kubenswrapper[4933]: I1201 10:00:18.016854 4933 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m52lh" podStartSLOduration=1.502417372 podStartE2EDuration="2.016816217s" podCreationTimestamp="2025-12-01 10:00:16 +0000 UTC" firstStartedPulling="2025-12-01 10:00:17.039817503 +0000 UTC m=+1707.681541118" lastFinishedPulling="2025-12-01 10:00:17.554216348 +0000 UTC m=+1708.195939963" observedRunningTime="2025-12-01 10:00:18.00804179 +0000 UTC m=+1708.649765405" watchObservedRunningTime="2025-12-01 10:00:18.016816217 +0000 UTC m=+1708.658539832" Dec 01 10:00:20 crc kubenswrapper[4933]: I1201 10:00:20.668171 4933 scope.go:117] "RemoveContainer" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" Dec 01 10:00:20 crc kubenswrapper[4933]: E1201 10:00:20.668968 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:00:26 crc kubenswrapper[4933]: I1201 10:00:26.047216 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-fq449"] Dec 01 10:00:26 crc kubenswrapper[4933]: I1201 10:00:26.057133 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-fq449"] Dec 01 10:00:27 crc kubenswrapper[4933]: I1201 10:00:27.681935 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f441a07b-df6a-4d0b-b8d5-fec3698ff0d4" path="/var/lib/kubelet/pods/f441a07b-df6a-4d0b-b8d5-fec3698ff0d4/volumes" Dec 01 10:00:34 crc kubenswrapper[4933]: I1201 10:00:34.667496 4933 scope.go:117] "RemoveContainer" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" Dec 01 10:00:34 crc kubenswrapper[4933]: E1201 10:00:34.668570 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:00:48 crc kubenswrapper[4933]: I1201 10:00:48.668585 4933 scope.go:117] "RemoveContainer" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" Dec 01 10:00:48 crc kubenswrapper[4933]: E1201 10:00:48.669801 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:00:52 crc kubenswrapper[4933]: I1201 10:00:52.597805 4933 scope.go:117] "RemoveContainer" containerID="2d616d91c201353db3642b292ecc5cd8140bcce146fce6646e58509494e34a0c" Dec 01 10:00:52 crc kubenswrapper[4933]: I1201 10:00:52.624699 4933 scope.go:117] "RemoveContainer" containerID="06968cacc0b4753e6639222a2d977d207844f5c87f1532734a82db20f5c18026" Dec 01 10:00:52 crc 
kubenswrapper[4933]: I1201 10:00:52.679759 4933 scope.go:117] "RemoveContainer" containerID="7dda46422342fa46621cbb66b446a819140f58c7205d495eddf9114f4bad4578" Dec 01 10:00:52 crc kubenswrapper[4933]: I1201 10:00:52.737025 4933 scope.go:117] "RemoveContainer" containerID="f23f8c88c27f25454d94dda25b2e71d9d1e17c174df760b9a08a4f282d143e6b" Dec 01 10:00:52 crc kubenswrapper[4933]: I1201 10:00:52.764607 4933 scope.go:117] "RemoveContainer" containerID="c1356040b71202ddb733101737308a02253053e6aa3f78c22036469f9927234f" Dec 01 10:00:52 crc kubenswrapper[4933]: I1201 10:00:52.808930 4933 scope.go:117] "RemoveContainer" containerID="3a7e4c88cff28bcd149cb1d01fe204c79701912dcebf9c07f68e2a4dff787061" Dec 01 10:00:52 crc kubenswrapper[4933]: I1201 10:00:52.859468 4933 scope.go:117] "RemoveContainer" containerID="343287ae22c6c8430ba36bddf7cad2457d7b37c6427e70f681f59daf4efb35d3" Dec 01 10:00:52 crc kubenswrapper[4933]: I1201 10:00:52.905757 4933 scope.go:117] "RemoveContainer" containerID="ac428dd94534f970a053a9e0930f9c1705565e3c2a2d1f1532742e5a28d8ce68" Dec 01 10:01:00 crc kubenswrapper[4933]: I1201 10:01:00.158915 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29409721-9h79w"] Dec 01 10:01:00 crc kubenswrapper[4933]: I1201 10:01:00.161809 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29409721-9h79w" Dec 01 10:01:00 crc kubenswrapper[4933]: I1201 10:01:00.175538 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29409721-9h79w"] Dec 01 10:01:00 crc kubenswrapper[4933]: I1201 10:01:00.303236 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4069c3e-e0a1-4aa7-b54d-a040272d3db4-config-data\") pod \"keystone-cron-29409721-9h79w\" (UID: \"e4069c3e-e0a1-4aa7-b54d-a040272d3db4\") " pod="openstack/keystone-cron-29409721-9h79w" Dec 01 10:01:00 crc kubenswrapper[4933]: I1201 10:01:00.303792 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9db2x\" (UniqueName: \"kubernetes.io/projected/e4069c3e-e0a1-4aa7-b54d-a040272d3db4-kube-api-access-9db2x\") pod \"keystone-cron-29409721-9h79w\" (UID: \"e4069c3e-e0a1-4aa7-b54d-a040272d3db4\") " pod="openstack/keystone-cron-29409721-9h79w" Dec 01 10:01:00 crc kubenswrapper[4933]: I1201 10:01:00.303833 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4069c3e-e0a1-4aa7-b54d-a040272d3db4-combined-ca-bundle\") pod \"keystone-cron-29409721-9h79w\" (UID: \"e4069c3e-e0a1-4aa7-b54d-a040272d3db4\") " pod="openstack/keystone-cron-29409721-9h79w" Dec 01 10:01:00 crc kubenswrapper[4933]: I1201 10:01:00.303873 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e4069c3e-e0a1-4aa7-b54d-a040272d3db4-fernet-keys\") pod \"keystone-cron-29409721-9h79w\" (UID: \"e4069c3e-e0a1-4aa7-b54d-a040272d3db4\") " pod="openstack/keystone-cron-29409721-9h79w" Dec 01 10:01:00 crc kubenswrapper[4933]: I1201 10:01:00.406484 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4069c3e-e0a1-4aa7-b54d-a040272d3db4-config-data\") pod \"keystone-cron-29409721-9h79w\" (UID: \"e4069c3e-e0a1-4aa7-b54d-a040272d3db4\") " 
pod="openstack/keystone-cron-29409721-9h79w" Dec 01 10:01:00 crc kubenswrapper[4933]: I1201 10:01:00.406714 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9db2x\" (UniqueName: \"kubernetes.io/projected/e4069c3e-e0a1-4aa7-b54d-a040272d3db4-kube-api-access-9db2x\") pod \"keystone-cron-29409721-9h79w\" (UID: \"e4069c3e-e0a1-4aa7-b54d-a040272d3db4\") " pod="openstack/keystone-cron-29409721-9h79w" Dec 01 10:01:00 crc kubenswrapper[4933]: I1201 10:01:00.406777 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4069c3e-e0a1-4aa7-b54d-a040272d3db4-combined-ca-bundle\") pod \"keystone-cron-29409721-9h79w\" (UID: \"e4069c3e-e0a1-4aa7-b54d-a040272d3db4\") " pod="openstack/keystone-cron-29409721-9h79w" Dec 01 10:01:00 crc kubenswrapper[4933]: I1201 10:01:00.406826 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e4069c3e-e0a1-4aa7-b54d-a040272d3db4-fernet-keys\") pod \"keystone-cron-29409721-9h79w\" (UID: \"e4069c3e-e0a1-4aa7-b54d-a040272d3db4\") " pod="openstack/keystone-cron-29409721-9h79w" Dec 01 10:01:00 crc kubenswrapper[4933]: I1201 10:01:00.414349 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4069c3e-e0a1-4aa7-b54d-a040272d3db4-combined-ca-bundle\") pod \"keystone-cron-29409721-9h79w\" (UID: \"e4069c3e-e0a1-4aa7-b54d-a040272d3db4\") " pod="openstack/keystone-cron-29409721-9h79w" Dec 01 10:01:00 crc kubenswrapper[4933]: I1201 10:01:00.414770 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4069c3e-e0a1-4aa7-b54d-a040272d3db4-config-data\") pod \"keystone-cron-29409721-9h79w\" (UID: \"e4069c3e-e0a1-4aa7-b54d-a040272d3db4\") " pod="openstack/keystone-cron-29409721-9h79w" Dec 01 10:01:00 crc kubenswrapper[4933]: I1201 10:01:00.415070 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e4069c3e-e0a1-4aa7-b54d-a040272d3db4-fernet-keys\") pod \"keystone-cron-29409721-9h79w\" (UID: \"e4069c3e-e0a1-4aa7-b54d-a040272d3db4\") " pod="openstack/keystone-cron-29409721-9h79w" Dec 01 10:01:00 crc kubenswrapper[4933]: I1201 10:01:00.427253 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9db2x\" (UniqueName: \"kubernetes.io/projected/e4069c3e-e0a1-4aa7-b54d-a040272d3db4-kube-api-access-9db2x\") pod \"keystone-cron-29409721-9h79w\" (UID: \"e4069c3e-e0a1-4aa7-b54d-a040272d3db4\") " pod="openstack/keystone-cron-29409721-9h79w" Dec 01 10:01:00 crc kubenswrapper[4933]: I1201 10:01:00.488009 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29409721-9h79w" Dec 01 10:01:00 crc kubenswrapper[4933]: I1201 10:01:00.936164 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29409721-9h79w"] Dec 01 10:01:01 crc kubenswrapper[4933]: I1201 10:01:01.460193 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29409721-9h79w" event={"ID":"e4069c3e-e0a1-4aa7-b54d-a040272d3db4","Type":"ContainerStarted","Data":"26aa9943adb3928cd5fe9f1077211a27bb675b8704d067ca2c4c0e3d10656306"} Dec 01 10:01:01 crc kubenswrapper[4933]: I1201 10:01:01.460785 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29409721-9h79w" event={"ID":"e4069c3e-e0a1-4aa7-b54d-a040272d3db4","Type":"ContainerStarted","Data":"eeaf42540e04d6f3b47edf15a1095f520b3f091bffe25a57b1f6873cc384e5e1"} Dec 01 10:01:01 crc kubenswrapper[4933]: I1201 10:01:01.480728 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29409721-9h79w" podStartSLOduration=1.4807103160000001 podStartE2EDuration="1.480710316s" podCreationTimestamp="2025-12-01 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 10:01:01.478869151 +0000 UTC m=+1752.120592776" watchObservedRunningTime="2025-12-01 10:01:01.480710316 +0000 UTC m=+1752.122433931" Dec 01 10:01:02 crc kubenswrapper[4933]: I1201 10:01:02.667797 4933 scope.go:117] "RemoveContainer" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" Dec 01 10:01:02 crc kubenswrapper[4933]: E1201 10:01:02.669680 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:01:04 crc kubenswrapper[4933]: I1201 10:01:04.497345 4933 generic.go:334] "Generic (PLEG): container finished" podID="e4069c3e-e0a1-4aa7-b54d-a040272d3db4" containerID="26aa9943adb3928cd5fe9f1077211a27bb675b8704d067ca2c4c0e3d10656306" exitCode=0 Dec 01 10:01:04 crc kubenswrapper[4933]: I1201 10:01:04.497395 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29409721-9h79w" event={"ID":"e4069c3e-e0a1-4aa7-b54d-a040272d3db4","Type":"ContainerDied","Data":"26aa9943adb3928cd5fe9f1077211a27bb675b8704d067ca2c4c0e3d10656306"} Dec 01 10:01:05 crc kubenswrapper[4933]: I1201 10:01:05.877067 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29409721-9h79w" Dec 01 10:01:06 crc kubenswrapper[4933]: I1201 10:01:06.036327 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4069c3e-e0a1-4aa7-b54d-a040272d3db4-combined-ca-bundle\") pod \"e4069c3e-e0a1-4aa7-b54d-a040272d3db4\" (UID: \"e4069c3e-e0a1-4aa7-b54d-a040272d3db4\") " Dec 01 10:01:06 crc kubenswrapper[4933]: I1201 10:01:06.036884 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9db2x\" (UniqueName: \"kubernetes.io/projected/e4069c3e-e0a1-4aa7-b54d-a040272d3db4-kube-api-access-9db2x\") pod \"e4069c3e-e0a1-4aa7-b54d-a040272d3db4\" (UID: \"e4069c3e-e0a1-4aa7-b54d-a040272d3db4\") " Dec 01 10:01:06 crc kubenswrapper[4933]: I1201 10:01:06.037073 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4069c3e-e0a1-4aa7-b54d-a040272d3db4-config-data\") pod \"e4069c3e-e0a1-4aa7-b54d-a040272d3db4\" (UID: \"e4069c3e-e0a1-4aa7-b54d-a040272d3db4\") " Dec 01 10:01:06 crc kubenswrapper[4933]: I1201 10:01:06.037203 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e4069c3e-e0a1-4aa7-b54d-a040272d3db4-fernet-keys\") pod \"e4069c3e-e0a1-4aa7-b54d-a040272d3db4\" (UID: \"e4069c3e-e0a1-4aa7-b54d-a040272d3db4\") " Dec 01 10:01:06 crc kubenswrapper[4933]: I1201 10:01:06.049664 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4069c3e-e0a1-4aa7-b54d-a040272d3db4-kube-api-access-9db2x" (OuterVolumeSpecName: "kube-api-access-9db2x") pod "e4069c3e-e0a1-4aa7-b54d-a040272d3db4" (UID: "e4069c3e-e0a1-4aa7-b54d-a040272d3db4"). InnerVolumeSpecName "kube-api-access-9db2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:01:06 crc kubenswrapper[4933]: I1201 10:01:06.054577 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4069c3e-e0a1-4aa7-b54d-a040272d3db4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e4069c3e-e0a1-4aa7-b54d-a040272d3db4" (UID: "e4069c3e-e0a1-4aa7-b54d-a040272d3db4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:01:06 crc kubenswrapper[4933]: I1201 10:01:06.089807 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4069c3e-e0a1-4aa7-b54d-a040272d3db4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4069c3e-e0a1-4aa7-b54d-a040272d3db4" (UID: "e4069c3e-e0a1-4aa7-b54d-a040272d3db4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:01:06 crc kubenswrapper[4933]: I1201 10:01:06.111005 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4069c3e-e0a1-4aa7-b54d-a040272d3db4-config-data" (OuterVolumeSpecName: "config-data") pod "e4069c3e-e0a1-4aa7-b54d-a040272d3db4" (UID: "e4069c3e-e0a1-4aa7-b54d-a040272d3db4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:01:06 crc kubenswrapper[4933]: I1201 10:01:06.140025 4933 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4069c3e-e0a1-4aa7-b54d-a040272d3db4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 10:01:06 crc kubenswrapper[4933]: I1201 10:01:06.140076 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9db2x\" (UniqueName: \"kubernetes.io/projected/e4069c3e-e0a1-4aa7-b54d-a040272d3db4-kube-api-access-9db2x\") on node \"crc\" DevicePath \"\"" Dec 01 10:01:06 crc kubenswrapper[4933]: I1201 10:01:06.140092 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4069c3e-e0a1-4aa7-b54d-a040272d3db4-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 10:01:06 crc kubenswrapper[4933]: I1201 10:01:06.140102 4933 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e4069c3e-e0a1-4aa7-b54d-a040272d3db4-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 01 10:01:06 crc kubenswrapper[4933]: I1201 10:01:06.521711 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29409721-9h79w" event={"ID":"e4069c3e-e0a1-4aa7-b54d-a040272d3db4","Type":"ContainerDied","Data":"eeaf42540e04d6f3b47edf15a1095f520b3f091bffe25a57b1f6873cc384e5e1"} Dec 01 10:01:06 crc kubenswrapper[4933]: I1201 10:01:06.521761 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeaf42540e04d6f3b47edf15a1095f520b3f091bffe25a57b1f6873cc384e5e1" Dec 01 10:01:06 crc kubenswrapper[4933]: I1201 10:01:06.521867 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29409721-9h79w" Dec 01 10:01:09 crc kubenswrapper[4933]: I1201 10:01:09.047801 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-gb654"] Dec 01 10:01:09 crc kubenswrapper[4933]: I1201 10:01:09.057571 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-j9kzb"] Dec 01 10:01:09 crc kubenswrapper[4933]: I1201 10:01:09.067962 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-j9kzb"] Dec 01 10:01:09 crc kubenswrapper[4933]: I1201 10:01:09.079406 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-gb654"] Dec 01 10:01:09 crc kubenswrapper[4933]: I1201 10:01:09.698198 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a0de276-0552-49b4-a7ef-ee46a6a07983" path="/var/lib/kubelet/pods/0a0de276-0552-49b4-a7ef-ee46a6a07983/volumes" Dec 01 10:01:09 crc kubenswrapper[4933]: I1201 10:01:09.699357 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67c7186d-9c43-48a2-baf8-67143842715e" path="/var/lib/kubelet/pods/67c7186d-9c43-48a2-baf8-67143842715e/volumes" Dec 01 10:01:10 crc kubenswrapper[4933]: I1201 10:01:10.063557 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-e124-account-create-update-jhpxj"] Dec 01 10:01:10 crc kubenswrapper[4933]: I1201 10:01:10.080225 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-158d-account-create-update-27q24"] Dec 01 10:01:10 crc kubenswrapper[4933]: I1201 10:01:10.091993 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-xvxdh"] Dec 01 10:01:10 crc kubenswrapper[4933]: I1201 10:01:10.101356 
4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-ca4e-account-create-update-sjbgj"] Dec 01 10:01:10 crc kubenswrapper[4933]: I1201 10:01:10.109627 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-e124-account-create-update-jhpxj"] Dec 01 10:01:10 crc kubenswrapper[4933]: I1201 10:01:10.117488 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-158d-account-create-update-27q24"] Dec 01 10:01:10 crc kubenswrapper[4933]: I1201 10:01:10.129242 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-ca4e-account-create-update-sjbgj"] Dec 01 10:01:10 crc kubenswrapper[4933]: I1201 10:01:10.138239 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-xvxdh"] Dec 01 10:01:11 crc kubenswrapper[4933]: I1201 10:01:11.682987 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19d82604-29a2-4fa3-8a04-0c0f456dc62b" path="/var/lib/kubelet/pods/19d82604-29a2-4fa3-8a04-0c0f456dc62b/volumes" Dec 01 10:01:11 crc kubenswrapper[4933]: I1201 10:01:11.684441 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c64f8f7-73b4-41da-848d-48951c88da96" path="/var/lib/kubelet/pods/3c64f8f7-73b4-41da-848d-48951c88da96/volumes" Dec 01 10:01:11 crc kubenswrapper[4933]: I1201 10:01:11.685683 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5083456d-b59d-4697-b8b2-c11158ee75fa" path="/var/lib/kubelet/pods/5083456d-b59d-4697-b8b2-c11158ee75fa/volumes" Dec 01 10:01:11 crc kubenswrapper[4933]: I1201 10:01:11.686404 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df" path="/var/lib/kubelet/pods/64bc0b91-4fe2-4f5b-9a3a-8f7ec7c254df/volumes" Dec 01 10:01:13 crc kubenswrapper[4933]: I1201 10:01:13.667497 4933 scope.go:117] "RemoveContainer" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" Dec 01 10:01:13 crc kubenswrapper[4933]: E1201 10:01:13.668191 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:01:15 crc kubenswrapper[4933]: I1201 10:01:15.029013 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-4j5nb"] Dec 01 10:01:15 crc kubenswrapper[4933]: I1201 10:01:15.043084 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-4j5nb"] Dec 01 10:01:15 crc kubenswrapper[4933]: I1201 10:01:15.679528 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaf6393e-2760-4dc5-8ccf-3079d38a2e87" path="/var/lib/kubelet/pods/aaf6393e-2760-4dc5-8ccf-3079d38a2e87/volumes" Dec 01 10:01:16 crc kubenswrapper[4933]: I1201 10:01:16.032413 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-kxbq6"] Dec 01 10:01:16 crc kubenswrapper[4933]: I1201 10:01:16.042868 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-kxbq6"] Dec 01 10:01:17 crc kubenswrapper[4933]: I1201 10:01:17.680154 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c54760e-eb3b-4ad1-a6ee-3c494878c668" 
path="/var/lib/kubelet/pods/3c54760e-eb3b-4ad1-a6ee-3c494878c668/volumes" Dec 01 10:01:25 crc kubenswrapper[4933]: I1201 10:01:25.667961 4933 scope.go:117] "RemoveContainer" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" Dec 01 10:01:25 crc kubenswrapper[4933]: E1201 10:01:25.668781 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:01:40 crc kubenswrapper[4933]: I1201 10:01:40.669150 4933 scope.go:117] "RemoveContainer" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" Dec 01 10:01:40 crc kubenswrapper[4933]: E1201 10:01:40.670096 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:01:44 crc kubenswrapper[4933]: I1201 10:01:44.180881 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p6ldj"] Dec 01 10:01:44 crc kubenswrapper[4933]: E1201 10:01:44.181701 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4069c3e-e0a1-4aa7-b54d-a040272d3db4" containerName="keystone-cron" Dec 01 10:01:44 crc kubenswrapper[4933]: I1201 10:01:44.181716 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4069c3e-e0a1-4aa7-b54d-a040272d3db4" containerName="keystone-cron" Dec 01 10:01:44 crc kubenswrapper[4933]: I1201 10:01:44.181904 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4069c3e-e0a1-4aa7-b54d-a040272d3db4" containerName="keystone-cron" Dec 01 10:01:44 crc kubenswrapper[4933]: I1201 10:01:44.183468 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p6ldj" Dec 01 10:01:44 crc kubenswrapper[4933]: I1201 10:01:44.200373 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p6ldj"] Dec 01 10:01:44 crc kubenswrapper[4933]: I1201 10:01:44.324920 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzjmh\" (UniqueName: \"kubernetes.io/projected/d11ee476-c470-44a2-8570-ebc6f5893bdb-kube-api-access-gzjmh\") pod \"redhat-operators-p6ldj\" (UID: \"d11ee476-c470-44a2-8570-ebc6f5893bdb\") " pod="openshift-marketplace/redhat-operators-p6ldj" Dec 01 10:01:44 crc kubenswrapper[4933]: I1201 10:01:44.325145 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d11ee476-c470-44a2-8570-ebc6f5893bdb-utilities\") pod \"redhat-operators-p6ldj\" (UID: \"d11ee476-c470-44a2-8570-ebc6f5893bdb\") " pod="openshift-marketplace/redhat-operators-p6ldj" Dec 01 10:01:44 crc kubenswrapper[4933]: I1201 10:01:44.325164 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d11ee476-c470-44a2-8570-ebc6f5893bdb-catalog-content\") pod \"redhat-operators-p6ldj\" (UID: \"d11ee476-c470-44a2-8570-ebc6f5893bdb\") " pod="openshift-marketplace/redhat-operators-p6ldj" Dec 01 10:01:44 crc kubenswrapper[4933]: I1201 10:01:44.429192 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d11ee476-c470-44a2-8570-ebc6f5893bdb-utilities\") pod \"redhat-operators-p6ldj\" (UID: \"d11ee476-c470-44a2-8570-ebc6f5893bdb\") " pod="openshift-marketplace/redhat-operators-p6ldj" Dec 01 10:01:44 crc kubenswrapper[4933]: I1201 10:01:44.429613 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d11ee476-c470-44a2-8570-ebc6f5893bdb-catalog-content\") pod \"redhat-operators-p6ldj\" (UID: \"d11ee476-c470-44a2-8570-ebc6f5893bdb\") " pod="openshift-marketplace/redhat-operators-p6ldj" Dec 01 10:01:44 crc kubenswrapper[4933]: I1201 10:01:44.429845 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzjmh\" (UniqueName: \"kubernetes.io/projected/d11ee476-c470-44a2-8570-ebc6f5893bdb-kube-api-access-gzjmh\") pod \"redhat-operators-p6ldj\" (UID: \"d11ee476-c470-44a2-8570-ebc6f5893bdb\") " pod="openshift-marketplace/redhat-operators-p6ldj" Dec 01 10:01:44 crc kubenswrapper[4933]: I1201 10:01:44.430065 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d11ee476-c470-44a2-8570-ebc6f5893bdb-utilities\") pod \"redhat-operators-p6ldj\" (UID: \"d11ee476-c470-44a2-8570-ebc6f5893bdb\") " pod="openshift-marketplace/redhat-operators-p6ldj" Dec 01 10:01:44 crc kubenswrapper[4933]: I1201 10:01:44.430482 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d11ee476-c470-44a2-8570-ebc6f5893bdb-catalog-content\") pod \"redhat-operators-p6ldj\" (UID: \"d11ee476-c470-44a2-8570-ebc6f5893bdb\") " pod="openshift-marketplace/redhat-operators-p6ldj" Dec 01 10:01:44 crc kubenswrapper[4933]: I1201 10:01:44.457114 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gzjmh\" (UniqueName: \"kubernetes.io/projected/d11ee476-c470-44a2-8570-ebc6f5893bdb-kube-api-access-gzjmh\") pod \"redhat-operators-p6ldj\" (UID: \"d11ee476-c470-44a2-8570-ebc6f5893bdb\") " pod="openshift-marketplace/redhat-operators-p6ldj" Dec 01 10:01:44 crc kubenswrapper[4933]: I1201 10:01:44.508362 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p6ldj" Dec 01 10:01:45 crc kubenswrapper[4933]: I1201 10:01:45.102868 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p6ldj"] Dec 01 10:01:45 crc kubenswrapper[4933]: I1201 10:01:45.947654 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6ldj" event={"ID":"d11ee476-c470-44a2-8570-ebc6f5893bdb","Type":"ContainerDied","Data":"1b9872314a2e500f673b29f8b650e7d5e64277f681b7f76057217a9fd1924baf"} Dec 01 10:01:45 crc kubenswrapper[4933]: I1201 10:01:45.947598 4933 generic.go:334] "Generic (PLEG): container finished" podID="d11ee476-c470-44a2-8570-ebc6f5893bdb" containerID="1b9872314a2e500f673b29f8b650e7d5e64277f681b7f76057217a9fd1924baf" exitCode=0 Dec 01 10:01:45 crc kubenswrapper[4933]: I1201 10:01:45.948119 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6ldj" event={"ID":"d11ee476-c470-44a2-8570-ebc6f5893bdb","Type":"ContainerStarted","Data":"ed64d8e1469ef13ab09be35dc7d8c369fdb22899b75e30b7264e8a24da5ba3ae"} Dec 01 10:01:49 crc kubenswrapper[4933]: I1201 10:01:49.060905 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-ltlmj"] Dec 01 10:01:49 crc kubenswrapper[4933]: I1201 10:01:49.076382 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-ltlmj"] Dec 01 10:01:49 crc kubenswrapper[4933]: I1201 10:01:49.681078 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4531da4-441c-4003-9f20-719853edb0b4" path="/var/lib/kubelet/pods/c4531da4-441c-4003-9f20-719853edb0b4/volumes" Dec 01 10:01:52 crc kubenswrapper[4933]: I1201 10:01:52.667789 4933 scope.go:117] "RemoveContainer" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" Dec 01 10:01:52 crc kubenswrapper[4933]: E1201 10:01:52.669061 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:01:53 crc kubenswrapper[4933]: I1201 10:01:53.050090 4933 scope.go:117] "RemoveContainer" containerID="0649f1c6942f179c383337224226162a54f82fdbc301ffc50fafa75a579626ee" Dec 01 10:01:56 crc kubenswrapper[4933]: I1201 10:01:56.978260 4933 scope.go:117] "RemoveContainer" containerID="48351fb94199d69c4deec1dafedeb440220f533f734824feec8368cff225885c" Dec 01 10:01:57 crc kubenswrapper[4933]: I1201 10:01:57.017174 4933 scope.go:117] "RemoveContainer" containerID="e56de64b8e858c1e0e2fb440271398280e46ce422fe2dfb45e2a37773bdf974e" Dec 01 10:01:57 crc kubenswrapper[4933]: I1201 10:01:57.153018 4933 scope.go:117] "RemoveContainer" containerID="92d551697ed02edbcbee2a7268a083d10ec87b94bf5cee3ee8c7b30c5f053cb8" Dec 01 10:01:57 crc kubenswrapper[4933]: I1201 10:01:57.217018 4933 scope.go:117] 
"RemoveContainer" containerID="99662f468e975adb9b1fc1368306f487fe8b8df5d9a50c9e5697c58c44d3e6b3" Dec 01 10:01:57 crc kubenswrapper[4933]: I1201 10:01:57.309939 4933 scope.go:117] "RemoveContainer" containerID="d10fdbcba439e53f21982e442ee2b24ae602f161378d0673960fa352fff593fd" Dec 01 10:01:57 crc kubenswrapper[4933]: I1201 10:01:57.347854 4933 scope.go:117] "RemoveContainer" containerID="dc4904d7279d4a2d91ff4304dd1ee59268b823111ef072b5f8edac3493f64589" Dec 01 10:01:57 crc kubenswrapper[4933]: I1201 10:01:57.378854 4933 scope.go:117] "RemoveContainer" containerID="bc2ded9eac3aa46c05ac5e11fb2d0a4ca60d881c0c72604f9a5445d716136467" Dec 01 10:01:57 crc kubenswrapper[4933]: I1201 10:01:57.434375 4933 scope.go:117] "RemoveContainer" containerID="2539d04a02ccfdebb2af26f48d9c89f1ed297f931ac0804f4ad11692f0128239" Dec 01 10:01:58 crc kubenswrapper[4933]: I1201 10:01:58.155452 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6ldj" event={"ID":"d11ee476-c470-44a2-8570-ebc6f5893bdb","Type":"ContainerStarted","Data":"173d0379e7bcd42383b9aa25e92103444c2a6f5f81df61539c7457fa5fe9f447"} Dec 01 10:01:59 crc kubenswrapper[4933]: I1201 10:01:59.038999 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-8lwh6"] Dec 01 10:01:59 crc kubenswrapper[4933]: I1201 10:01:59.054022 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-8lwh6"] Dec 01 10:01:59 crc kubenswrapper[4933]: I1201 10:01:59.690093 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7853c81a-e365-473f-a4a7-4fcc87f625cd" path="/var/lib/kubelet/pods/7853c81a-e365-473f-a4a7-4fcc87f625cd/volumes" Dec 01 10:02:03 crc kubenswrapper[4933]: I1201 10:02:03.214118 4933 generic.go:334] "Generic (PLEG): container finished" podID="d11ee476-c470-44a2-8570-ebc6f5893bdb" containerID="173d0379e7bcd42383b9aa25e92103444c2a6f5f81df61539c7457fa5fe9f447" exitCode=0 Dec 01 10:02:03 crc kubenswrapper[4933]: I1201 10:02:03.214236 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6ldj" event={"ID":"d11ee476-c470-44a2-8570-ebc6f5893bdb","Type":"ContainerDied","Data":"173d0379e7bcd42383b9aa25e92103444c2a6f5f81df61539c7457fa5fe9f447"} Dec 01 10:02:06 crc kubenswrapper[4933]: I1201 10:02:06.667706 4933 scope.go:117] "RemoveContainer" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" Dec 01 10:02:06 crc kubenswrapper[4933]: E1201 10:02:06.668808 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:02:07 crc kubenswrapper[4933]: I1201 10:02:07.042388 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-cstm4"] Dec 01 10:02:07 crc kubenswrapper[4933]: I1201 10:02:07.053708 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-cstm4"] Dec 01 10:02:07 crc kubenswrapper[4933]: I1201 10:02:07.685824 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="321a4d39-7ce4-4385-a6f5-5204da92683b" path="/var/lib/kubelet/pods/321a4d39-7ce4-4385-a6f5-5204da92683b/volumes" Dec 01 10:02:08 crc 
kubenswrapper[4933]: I1201 10:02:08.280723 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6ldj" event={"ID":"d11ee476-c470-44a2-8570-ebc6f5893bdb","Type":"ContainerStarted","Data":"5c18450c4a829469afa65382e31f06905b92fd129250cc074a8f2589e55bc401"} Dec 01 10:02:08 crc kubenswrapper[4933]: I1201 10:02:08.313927 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p6ldj" podStartSLOduration=3.188337487 podStartE2EDuration="24.31389687s" podCreationTimestamp="2025-12-01 10:01:44 +0000 UTC" firstStartedPulling="2025-12-01 10:01:45.947947132 +0000 UTC m=+1796.589670747" lastFinishedPulling="2025-12-01 10:02:07.073506515 +0000 UTC m=+1817.715230130" observedRunningTime="2025-12-01 10:02:08.305585235 +0000 UTC m=+1818.947308850" watchObservedRunningTime="2025-12-01 10:02:08.31389687 +0000 UTC m=+1818.955620485" Dec 01 10:02:13 crc kubenswrapper[4933]: I1201 10:02:13.337150 4933 generic.go:334] "Generic (PLEG): container finished" podID="85720139-3c78-4370-98fc-31899c778fd9" containerID="1b5beefe2d8dc01b430a7c353a6abd0311295a1198cbc7119f727bb97479dc3a" exitCode=0 Dec 01 10:02:13 crc kubenswrapper[4933]: I1201 10:02:13.337416 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m52lh" event={"ID":"85720139-3c78-4370-98fc-31899c778fd9","Type":"ContainerDied","Data":"1b5beefe2d8dc01b430a7c353a6abd0311295a1198cbc7119f727bb97479dc3a"} Dec 01 10:02:14 crc kubenswrapper[4933]: I1201 10:02:14.510101 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p6ldj" Dec 01 10:02:14 crc kubenswrapper[4933]: I1201 10:02:14.510549 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p6ldj" Dec 01 10:02:14 crc kubenswrapper[4933]: I1201 10:02:14.602636 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p6ldj" Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.178860 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m52lh" Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.334971 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4rmm\" (UniqueName: \"kubernetes.io/projected/85720139-3c78-4370-98fc-31899c778fd9-kube-api-access-m4rmm\") pod \"85720139-3c78-4370-98fc-31899c778fd9\" (UID: \"85720139-3c78-4370-98fc-31899c778fd9\") " Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.335077 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85720139-3c78-4370-98fc-31899c778fd9-ssh-key\") pod \"85720139-3c78-4370-98fc-31899c778fd9\" (UID: \"85720139-3c78-4370-98fc-31899c778fd9\") " Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.335355 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85720139-3c78-4370-98fc-31899c778fd9-inventory\") pod \"85720139-3c78-4370-98fc-31899c778fd9\" (UID: \"85720139-3c78-4370-98fc-31899c778fd9\") " Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.343570 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85720139-3c78-4370-98fc-31899c778fd9-kube-api-access-m4rmm" (OuterVolumeSpecName: "kube-api-access-m4rmm") pod "85720139-3c78-4370-98fc-31899c778fd9" (UID: "85720139-3c78-4370-98fc-31899c778fd9"). InnerVolumeSpecName "kube-api-access-m4rmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.372340 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85720139-3c78-4370-98fc-31899c778fd9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "85720139-3c78-4370-98fc-31899c778fd9" (UID: "85720139-3c78-4370-98fc-31899c778fd9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.373257 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m52lh" Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.373334 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m52lh" event={"ID":"85720139-3c78-4370-98fc-31899c778fd9","Type":"ContainerDied","Data":"dec5f03d9a6e15e5a58491802dec2b5c9490c3a263dad73c3418d4557a78750f"} Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.373377 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dec5f03d9a6e15e5a58491802dec2b5c9490c3a263dad73c3418d4557a78750f" Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.425027 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85720139-3c78-4370-98fc-31899c778fd9-inventory" (OuterVolumeSpecName: "inventory") pod "85720139-3c78-4370-98fc-31899c778fd9" (UID: "85720139-3c78-4370-98fc-31899c778fd9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.441153 4933 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85720139-3c78-4370-98fc-31899c778fd9-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.441844 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4rmm\" (UniqueName: \"kubernetes.io/projected/85720139-3c78-4370-98fc-31899c778fd9-kube-api-access-m4rmm\") on node \"crc\" DevicePath \"\"" Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.441866 4933 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85720139-3c78-4370-98fc-31899c778fd9-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.478012 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p6ldj" Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.549717 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8"] Dec 01 10:02:15 crc kubenswrapper[4933]: E1201 10:02:15.551516 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85720139-3c78-4370-98fc-31899c778fd9" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.551561 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="85720139-3c78-4370-98fc-31899c778fd9" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.554034 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="85720139-3c78-4370-98fc-31899c778fd9" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.563234 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8"] Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.564576 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8" Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.638367 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p6ldj"] Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.649605 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4bcbb87-2840-4779-ad5a-9da140a34e9a-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8\" (UID: \"b4bcbb87-2840-4779-ad5a-9da140a34e9a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8" Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.649793 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4bcbb87-2840-4779-ad5a-9da140a34e9a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8\" (UID: \"b4bcbb87-2840-4779-ad5a-9da140a34e9a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8" Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.649886 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc6f9\" (UniqueName: \"kubernetes.io/projected/b4bcbb87-2840-4779-ad5a-9da140a34e9a-kube-api-access-gc6f9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8\" (UID: \"b4bcbb87-2840-4779-ad5a-9da140a34e9a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8" Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.686444 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-959q8"] Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.686839 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-959q8" podUID="8590b477-1f35-4afa-84b9-e96cb2c21535" containerName="registry-server" containerID="cri-o://b2ccebc27e3c0c649e2f8244e3a8567dac7011dc72516432a24e2fa654e0aa56" gracePeriod=2 Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.752419 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4bcbb87-2840-4779-ad5a-9da140a34e9a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8\" (UID: \"b4bcbb87-2840-4779-ad5a-9da140a34e9a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8" Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.752623 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc6f9\" (UniqueName: \"kubernetes.io/projected/b4bcbb87-2840-4779-ad5a-9da140a34e9a-kube-api-access-gc6f9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8\" (UID: \"b4bcbb87-2840-4779-ad5a-9da140a34e9a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8" Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.752854 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4bcbb87-2840-4779-ad5a-9da140a34e9a-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8\" (UID: \"b4bcbb87-2840-4779-ad5a-9da140a34e9a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8" Dec 01 
10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.764132 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4bcbb87-2840-4779-ad5a-9da140a34e9a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8\" (UID: \"b4bcbb87-2840-4779-ad5a-9da140a34e9a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8" Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.769897 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4bcbb87-2840-4779-ad5a-9da140a34e9a-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8\" (UID: \"b4bcbb87-2840-4779-ad5a-9da140a34e9a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8" Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.775586 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc6f9\" (UniqueName: \"kubernetes.io/projected/b4bcbb87-2840-4779-ad5a-9da140a34e9a-kube-api-access-gc6f9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8\" (UID: \"b4bcbb87-2840-4779-ad5a-9da140a34e9a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8" Dec 01 10:02:15 crc kubenswrapper[4933]: I1201 10:02:15.905508 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8" Dec 01 10:02:16 crc kubenswrapper[4933]: I1201 10:02:16.249140 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-959q8" Dec 01 10:02:16 crc kubenswrapper[4933]: I1201 10:02:16.265888 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8590b477-1f35-4afa-84b9-e96cb2c21535-catalog-content\") pod \"8590b477-1f35-4afa-84b9-e96cb2c21535\" (UID: \"8590b477-1f35-4afa-84b9-e96cb2c21535\") " Dec 01 10:02:16 crc kubenswrapper[4933]: I1201 10:02:16.265985 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9nhn\" (UniqueName: \"kubernetes.io/projected/8590b477-1f35-4afa-84b9-e96cb2c21535-kube-api-access-d9nhn\") pod \"8590b477-1f35-4afa-84b9-e96cb2c21535\" (UID: \"8590b477-1f35-4afa-84b9-e96cb2c21535\") " Dec 01 10:02:16 crc kubenswrapper[4933]: I1201 10:02:16.266291 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8590b477-1f35-4afa-84b9-e96cb2c21535-utilities\") pod \"8590b477-1f35-4afa-84b9-e96cb2c21535\" (UID: \"8590b477-1f35-4afa-84b9-e96cb2c21535\") " Dec 01 10:02:16 crc kubenswrapper[4933]: I1201 10:02:16.267043 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8590b477-1f35-4afa-84b9-e96cb2c21535-utilities" (OuterVolumeSpecName: "utilities") pod "8590b477-1f35-4afa-84b9-e96cb2c21535" (UID: "8590b477-1f35-4afa-84b9-e96cb2c21535"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:02:16 crc kubenswrapper[4933]: I1201 10:02:16.283197 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8590b477-1f35-4afa-84b9-e96cb2c21535-kube-api-access-d9nhn" (OuterVolumeSpecName: "kube-api-access-d9nhn") pod "8590b477-1f35-4afa-84b9-e96cb2c21535" (UID: "8590b477-1f35-4afa-84b9-e96cb2c21535"). InnerVolumeSpecName "kube-api-access-d9nhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:02:16 crc kubenswrapper[4933]: I1201 10:02:16.370022 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9nhn\" (UniqueName: \"kubernetes.io/projected/8590b477-1f35-4afa-84b9-e96cb2c21535-kube-api-access-d9nhn\") on node \"crc\" DevicePath \"\"" Dec 01 10:02:16 crc kubenswrapper[4933]: I1201 10:02:16.370072 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8590b477-1f35-4afa-84b9-e96cb2c21535-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 10:02:16 crc kubenswrapper[4933]: I1201 10:02:16.397558 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8590b477-1f35-4afa-84b9-e96cb2c21535-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8590b477-1f35-4afa-84b9-e96cb2c21535" (UID: "8590b477-1f35-4afa-84b9-e96cb2c21535"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:02:16 crc kubenswrapper[4933]: I1201 10:02:16.406662 4933 generic.go:334] "Generic (PLEG): container finished" podID="8590b477-1f35-4afa-84b9-e96cb2c21535" containerID="b2ccebc27e3c0c649e2f8244e3a8567dac7011dc72516432a24e2fa654e0aa56" exitCode=0 Dec 01 10:02:16 crc kubenswrapper[4933]: I1201 10:02:16.406775 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-959q8" Dec 01 10:02:16 crc kubenswrapper[4933]: I1201 10:02:16.406798 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-959q8" event={"ID":"8590b477-1f35-4afa-84b9-e96cb2c21535","Type":"ContainerDied","Data":"b2ccebc27e3c0c649e2f8244e3a8567dac7011dc72516432a24e2fa654e0aa56"} Dec 01 10:02:16 crc kubenswrapper[4933]: I1201 10:02:16.406893 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-959q8" event={"ID":"8590b477-1f35-4afa-84b9-e96cb2c21535","Type":"ContainerDied","Data":"a4520e6618aaac6b7e064d1aae18a06f1ba781e8f7efc4c5af8f38c7817eb910"} Dec 01 10:02:16 crc kubenswrapper[4933]: I1201 10:02:16.406931 4933 scope.go:117] "RemoveContainer" containerID="b2ccebc27e3c0c649e2f8244e3a8567dac7011dc72516432a24e2fa654e0aa56" Dec 01 10:02:16 crc kubenswrapper[4933]: I1201 10:02:16.454627 4933 scope.go:117] "RemoveContainer" containerID="9d08083e4aaa229994c48ba6fa61febabb756a1fbf7140bcacafbebb9d2df846" Dec 01 10:02:16 crc kubenswrapper[4933]: I1201 10:02:16.459823 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-959q8"] Dec 01 10:02:16 crc kubenswrapper[4933]: I1201 10:02:16.472653 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8590b477-1f35-4afa-84b9-e96cb2c21535-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 10:02:16 crc kubenswrapper[4933]: I1201 10:02:16.489411 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-959q8"] Dec 01 10:02:16 crc kubenswrapper[4933]: I1201 10:02:16.501286 4933 scope.go:117] "RemoveContainer" containerID="71d2f7a52dda5ce47eaf914f01faac591ac43a2d3e6ece0af5bda0a1388dc23a" Dec 01 10:02:16 crc kubenswrapper[4933]: I1201 10:02:16.548488 4933 scope.go:117] "RemoveContainer" containerID="b2ccebc27e3c0c649e2f8244e3a8567dac7011dc72516432a24e2fa654e0aa56" Dec 01 10:02:16 crc kubenswrapper[4933]: E1201 10:02:16.549971 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2ccebc27e3c0c649e2f8244e3a8567dac7011dc72516432a24e2fa654e0aa56\": container with ID starting with b2ccebc27e3c0c649e2f8244e3a8567dac7011dc72516432a24e2fa654e0aa56 not found: ID does not exist" containerID="b2ccebc27e3c0c649e2f8244e3a8567dac7011dc72516432a24e2fa654e0aa56" Dec 01 10:02:16 crc kubenswrapper[4933]: I1201 10:02:16.550060 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2ccebc27e3c0c649e2f8244e3a8567dac7011dc72516432a24e2fa654e0aa56"} err="failed to get container status \"b2ccebc27e3c0c649e2f8244e3a8567dac7011dc72516432a24e2fa654e0aa56\": rpc error: code = NotFound desc = could not find container \"b2ccebc27e3c0c649e2f8244e3a8567dac7011dc72516432a24e2fa654e0aa56\": container with ID starting with b2ccebc27e3c0c649e2f8244e3a8567dac7011dc72516432a24e2fa654e0aa56 not found: ID does not exist" Dec 01 10:02:16 crc kubenswrapper[4933]: I1201 10:02:16.550118 4933 scope.go:117] "RemoveContainer" containerID="9d08083e4aaa229994c48ba6fa61febabb756a1fbf7140bcacafbebb9d2df846" Dec 01 10:02:16 crc kubenswrapper[4933]: E1201 10:02:16.551491 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d08083e4aaa229994c48ba6fa61febabb756a1fbf7140bcacafbebb9d2df846\": container with ID 
starting with 9d08083e4aaa229994c48ba6fa61febabb756a1fbf7140bcacafbebb9d2df846 not found: ID does not exist" containerID="9d08083e4aaa229994c48ba6fa61febabb756a1fbf7140bcacafbebb9d2df846" Dec 01 10:02:16 crc kubenswrapper[4933]: I1201 10:02:16.551562 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d08083e4aaa229994c48ba6fa61febabb756a1fbf7140bcacafbebb9d2df846"} err="failed to get container status \"9d08083e4aaa229994c48ba6fa61febabb756a1fbf7140bcacafbebb9d2df846\": rpc error: code = NotFound desc = could not find container \"9d08083e4aaa229994c48ba6fa61febabb756a1fbf7140bcacafbebb9d2df846\": container with ID starting with 9d08083e4aaa229994c48ba6fa61febabb756a1fbf7140bcacafbebb9d2df846 not found: ID does not exist" Dec 01 10:02:16 crc kubenswrapper[4933]: I1201 10:02:16.551609 4933 scope.go:117] "RemoveContainer" containerID="71d2f7a52dda5ce47eaf914f01faac591ac43a2d3e6ece0af5bda0a1388dc23a" Dec 01 10:02:16 crc kubenswrapper[4933]: E1201 10:02:16.552124 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71d2f7a52dda5ce47eaf914f01faac591ac43a2d3e6ece0af5bda0a1388dc23a\": container with ID starting with 71d2f7a52dda5ce47eaf914f01faac591ac43a2d3e6ece0af5bda0a1388dc23a not found: ID does not exist" containerID="71d2f7a52dda5ce47eaf914f01faac591ac43a2d3e6ece0af5bda0a1388dc23a" Dec 01 10:02:16 crc kubenswrapper[4933]: I1201 10:02:16.552191 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71d2f7a52dda5ce47eaf914f01faac591ac43a2d3e6ece0af5bda0a1388dc23a"} err="failed to get container status \"71d2f7a52dda5ce47eaf914f01faac591ac43a2d3e6ece0af5bda0a1388dc23a\": rpc error: code = NotFound desc = could not find container \"71d2f7a52dda5ce47eaf914f01faac591ac43a2d3e6ece0af5bda0a1388dc23a\": container with ID starting with 71d2f7a52dda5ce47eaf914f01faac591ac43a2d3e6ece0af5bda0a1388dc23a not found: ID does not exist" Dec 01 10:02:16 crc kubenswrapper[4933]: I1201 10:02:16.653101 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8"] Dec 01 10:02:17 crc kubenswrapper[4933]: I1201 10:02:17.422348 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8" event={"ID":"b4bcbb87-2840-4779-ad5a-9da140a34e9a","Type":"ContainerStarted","Data":"d7867a1d61d39dddbe7977596b3a8e5b74187dd7b69610e76147940520ba7836"} Dec 01 10:02:17 crc kubenswrapper[4933]: I1201 10:02:17.668713 4933 scope.go:117] "RemoveContainer" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" Dec 01 10:02:17 crc kubenswrapper[4933]: E1201 10:02:17.669509 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:02:17 crc kubenswrapper[4933]: I1201 10:02:17.682987 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8590b477-1f35-4afa-84b9-e96cb2c21535" path="/var/lib/kubelet/pods/8590b477-1f35-4afa-84b9-e96cb2c21535/volumes" Dec 01 10:02:19 crc kubenswrapper[4933]: I1201 10:02:19.454264 4933 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8" event={"ID":"b4bcbb87-2840-4779-ad5a-9da140a34e9a","Type":"ContainerStarted","Data":"d91b51c5d37a1021e1cfcb64072a4609a66af36745f531f09b61ce6ecd9e7445"} Dec 01 10:02:21 crc kubenswrapper[4933]: I1201 10:02:21.243491 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8" podStartSLOduration=4.743776705 podStartE2EDuration="6.243461368s" podCreationTimestamp="2025-12-01 10:02:15 +0000 UTC" firstStartedPulling="2025-12-01 10:02:16.657920696 +0000 UTC m=+1827.299644311" lastFinishedPulling="2025-12-01 10:02:18.157605359 +0000 UTC m=+1828.799328974" observedRunningTime="2025-12-01 10:02:19.494294066 +0000 UTC m=+1830.136017681" watchObservedRunningTime="2025-12-01 10:02:21.243461368 +0000 UTC m=+1831.885184983" Dec 01 10:02:21 crc kubenswrapper[4933]: I1201 10:02:21.250143 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-qw69v"] Dec 01 10:02:21 crc kubenswrapper[4933]: I1201 10:02:21.260729 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-qw69v"] Dec 01 10:02:21 crc kubenswrapper[4933]: I1201 10:02:21.685399 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82f89d96-ceb5-4012-9273-68d00cb0780b" path="/var/lib/kubelet/pods/82f89d96-ceb5-4012-9273-68d00cb0780b/volumes" Dec 01 10:02:23 crc kubenswrapper[4933]: I1201 10:02:23.040989 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-n76m8"] Dec 01 10:02:23 crc kubenswrapper[4933]: I1201 10:02:23.052440 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-n76m8"] Dec 01 10:02:23 crc kubenswrapper[4933]: I1201 10:02:23.682433 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d3e5dc3-470a-4fa0-b17c-733457329c79" path="/var/lib/kubelet/pods/4d3e5dc3-470a-4fa0-b17c-733457329c79/volumes" Dec 01 10:02:32 crc kubenswrapper[4933]: I1201 10:02:32.667969 4933 scope.go:117] "RemoveContainer" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" Dec 01 10:02:32 crc kubenswrapper[4933]: E1201 10:02:32.668974 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:02:46 crc kubenswrapper[4933]: I1201 10:02:46.668050 4933 scope.go:117] "RemoveContainer" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" Dec 01 10:02:46 crc kubenswrapper[4933]: E1201 10:02:46.669157 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:02:57 crc kubenswrapper[4933]: I1201 10:02:57.697951 4933 scope.go:117] "RemoveContainer" 
containerID="34b66823c89e2a4ba58921b9a111e53e35fdc1a2633c0cad1dca542f199791a0" Dec 01 10:02:57 crc kubenswrapper[4933]: I1201 10:02:57.801634 4933 scope.go:117] "RemoveContainer" containerID="3d4c2bbba9101c46a70cff7bd598fcdd9e716f2433c63dc7330090d3f2eaa09c" Dec 01 10:02:57 crc kubenswrapper[4933]: I1201 10:02:57.854647 4933 scope.go:117] "RemoveContainer" containerID="e102516c2e0995cd1f33d56d0b19c9ebd5259d37f1539b1d52bb816fcc61bf83" Dec 01 10:02:57 crc kubenswrapper[4933]: I1201 10:02:57.900769 4933 scope.go:117] "RemoveContainer" containerID="3d701ac010eab170108e72d514195cdeb1835b69da5d15fc2434388456d80f5c" Dec 01 10:02:58 crc kubenswrapper[4933]: I1201 10:02:58.668458 4933 scope.go:117] "RemoveContainer" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" Dec 01 10:02:58 crc kubenswrapper[4933]: E1201 10:02:58.669055 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:03:11 crc kubenswrapper[4933]: I1201 10:03:11.670647 4933 scope.go:117] "RemoveContainer" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" Dec 01 10:03:11 crc kubenswrapper[4933]: E1201 10:03:11.671933 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:03:22 crc kubenswrapper[4933]: I1201 10:03:22.040757 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-792b-account-create-update-ddcxv"] Dec 01 10:03:22 crc kubenswrapper[4933]: I1201 10:03:22.049164 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-792b-account-create-update-ddcxv"] Dec 01 10:03:22 crc kubenswrapper[4933]: I1201 10:03:22.667837 4933 scope.go:117] "RemoveContainer" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" Dec 01 10:03:22 crc kubenswrapper[4933]: E1201 10:03:22.668141 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:03:23 crc kubenswrapper[4933]: I1201 10:03:23.035046 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-47e6-account-create-update-mtm6v"] Dec 01 10:03:23 crc kubenswrapper[4933]: I1201 10:03:23.045521 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-47e6-account-create-update-mtm6v"] Dec 01 10:03:23 crc kubenswrapper[4933]: I1201 10:03:23.054112 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-69ffd"] Dec 01 10:03:23 crc kubenswrapper[4933]: I1201 
10:03:23.063580 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-525pg"] Dec 01 10:03:23 crc kubenswrapper[4933]: I1201 10:03:23.072295 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-157a-account-create-update-bfxmv"] Dec 01 10:03:23 crc kubenswrapper[4933]: I1201 10:03:23.081708 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-m6pgw"] Dec 01 10:03:23 crc kubenswrapper[4933]: I1201 10:03:23.090759 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-525pg"] Dec 01 10:03:23 crc kubenswrapper[4933]: I1201 10:03:23.099058 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-m6pgw"] Dec 01 10:03:23 crc kubenswrapper[4933]: I1201 10:03:23.106819 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-69ffd"] Dec 01 10:03:23 crc kubenswrapper[4933]: I1201 10:03:23.115169 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-157a-account-create-update-bfxmv"] Dec 01 10:03:23 crc kubenswrapper[4933]: I1201 10:03:23.873719 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36ad6a62-9532-4080-bfad-4f464fa988b0" path="/var/lib/kubelet/pods/36ad6a62-9532-4080-bfad-4f464fa988b0/volumes" Dec 01 10:03:23 crc kubenswrapper[4933]: I1201 10:03:23.875438 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="484151e2-8ea8-4bfa-8f7a-77c86fad9df0" path="/var/lib/kubelet/pods/484151e2-8ea8-4bfa-8f7a-77c86fad9df0/volumes" Dec 01 10:03:23 crc kubenswrapper[4933]: I1201 10:03:23.876188 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d1e236a-dbe4-415d-bae9-945971a11083" path="/var/lib/kubelet/pods/4d1e236a-dbe4-415d-bae9-945971a11083/volumes" Dec 01 10:03:23 crc kubenswrapper[4933]: I1201 10:03:23.876949 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6d99846-3d21-461d-91c9-b4f52973fd73" path="/var/lib/kubelet/pods/a6d99846-3d21-461d-91c9-b4f52973fd73/volumes" Dec 01 10:03:23 crc kubenswrapper[4933]: I1201 10:03:23.878388 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c15666fa-6b57-4765-98ce-4ffc163e1d49" path="/var/lib/kubelet/pods/c15666fa-6b57-4765-98ce-4ffc163e1d49/volumes" Dec 01 10:03:23 crc kubenswrapper[4933]: I1201 10:03:23.880860 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9e4aba6-b83f-42c8-b38c-f5293f898400" path="/var/lib/kubelet/pods/c9e4aba6-b83f-42c8-b38c-f5293f898400/volumes" Dec 01 10:03:36 crc kubenswrapper[4933]: I1201 10:03:36.668054 4933 scope.go:117] "RemoveContainer" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" Dec 01 10:03:36 crc kubenswrapper[4933]: E1201 10:03:36.670054 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:03:39 crc kubenswrapper[4933]: I1201 10:03:39.278785 4933 generic.go:334] "Generic (PLEG): container finished" podID="b4bcbb87-2840-4779-ad5a-9da140a34e9a" containerID="d91b51c5d37a1021e1cfcb64072a4609a66af36745f531f09b61ce6ecd9e7445" exitCode=0 Dec 01 
10:03:39 crc kubenswrapper[4933]: I1201 10:03:39.278871 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8" event={"ID":"b4bcbb87-2840-4779-ad5a-9da140a34e9a","Type":"ContainerDied","Data":"d91b51c5d37a1021e1cfcb64072a4609a66af36745f531f09b61ce6ecd9e7445"} Dec 01 10:03:40 crc kubenswrapper[4933]: I1201 10:03:40.685016 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8" Dec 01 10:03:40 crc kubenswrapper[4933]: I1201 10:03:40.864497 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4bcbb87-2840-4779-ad5a-9da140a34e9a-ssh-key\") pod \"b4bcbb87-2840-4779-ad5a-9da140a34e9a\" (UID: \"b4bcbb87-2840-4779-ad5a-9da140a34e9a\") " Dec 01 10:03:40 crc kubenswrapper[4933]: I1201 10:03:40.864655 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4bcbb87-2840-4779-ad5a-9da140a34e9a-inventory\") pod \"b4bcbb87-2840-4779-ad5a-9da140a34e9a\" (UID: \"b4bcbb87-2840-4779-ad5a-9da140a34e9a\") " Dec 01 10:03:40 crc kubenswrapper[4933]: I1201 10:03:40.864729 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc6f9\" (UniqueName: \"kubernetes.io/projected/b4bcbb87-2840-4779-ad5a-9da140a34e9a-kube-api-access-gc6f9\") pod \"b4bcbb87-2840-4779-ad5a-9da140a34e9a\" (UID: \"b4bcbb87-2840-4779-ad5a-9da140a34e9a\") " Dec 01 10:03:40 crc kubenswrapper[4933]: I1201 10:03:40.873193 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4bcbb87-2840-4779-ad5a-9da140a34e9a-kube-api-access-gc6f9" (OuterVolumeSpecName: "kube-api-access-gc6f9") pod "b4bcbb87-2840-4779-ad5a-9da140a34e9a" (UID: "b4bcbb87-2840-4779-ad5a-9da140a34e9a"). InnerVolumeSpecName "kube-api-access-gc6f9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:03:40 crc kubenswrapper[4933]: I1201 10:03:40.895349 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4bcbb87-2840-4779-ad5a-9da140a34e9a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b4bcbb87-2840-4779-ad5a-9da140a34e9a" (UID: "b4bcbb87-2840-4779-ad5a-9da140a34e9a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:03:40 crc kubenswrapper[4933]: I1201 10:03:40.901361 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4bcbb87-2840-4779-ad5a-9da140a34e9a-inventory" (OuterVolumeSpecName: "inventory") pod "b4bcbb87-2840-4779-ad5a-9da140a34e9a" (UID: "b4bcbb87-2840-4779-ad5a-9da140a34e9a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:03:40 crc kubenswrapper[4933]: I1201 10:03:40.967781 4933 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4bcbb87-2840-4779-ad5a-9da140a34e9a-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 10:03:40 crc kubenswrapper[4933]: I1201 10:03:40.967820 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc6f9\" (UniqueName: \"kubernetes.io/projected/b4bcbb87-2840-4779-ad5a-9da140a34e9a-kube-api-access-gc6f9\") on node \"crc\" DevicePath \"\"" Dec 01 10:03:40 crc kubenswrapper[4933]: I1201 10:03:40.967831 4933 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4bcbb87-2840-4779-ad5a-9da140a34e9a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 10:03:41 crc kubenswrapper[4933]: I1201 10:03:41.299996 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8" event={"ID":"b4bcbb87-2840-4779-ad5a-9da140a34e9a","Type":"ContainerDied","Data":"d7867a1d61d39dddbe7977596b3a8e5b74187dd7b69610e76147940520ba7836"} Dec 01 10:03:41 crc kubenswrapper[4933]: I1201 10:03:41.300466 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7867a1d61d39dddbe7977596b3a8e5b74187dd7b69610e76147940520ba7836" Dec 01 10:03:41 crc kubenswrapper[4933]: I1201 10:03:41.300074 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8" Dec 01 10:03:41 crc kubenswrapper[4933]: I1201 10:03:41.400422 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl"] Dec 01 10:03:41 crc kubenswrapper[4933]: E1201 10:03:41.401007 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4bcbb87-2840-4779-ad5a-9da140a34e9a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 01 10:03:41 crc kubenswrapper[4933]: I1201 10:03:41.401029 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4bcbb87-2840-4779-ad5a-9da140a34e9a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 01 10:03:41 crc kubenswrapper[4933]: E1201 10:03:41.401044 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8590b477-1f35-4afa-84b9-e96cb2c21535" containerName="extract-content" Dec 01 10:03:41 crc kubenswrapper[4933]: I1201 10:03:41.401052 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="8590b477-1f35-4afa-84b9-e96cb2c21535" containerName="extract-content" Dec 01 10:03:41 crc kubenswrapper[4933]: E1201 10:03:41.401074 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8590b477-1f35-4afa-84b9-e96cb2c21535" containerName="extract-utilities" Dec 01 10:03:41 crc kubenswrapper[4933]: I1201 10:03:41.401081 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="8590b477-1f35-4afa-84b9-e96cb2c21535" containerName="extract-utilities" Dec 01 10:03:41 crc kubenswrapper[4933]: E1201 10:03:41.401118 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8590b477-1f35-4afa-84b9-e96cb2c21535" containerName="registry-server" Dec 01 10:03:41 crc kubenswrapper[4933]: I1201 10:03:41.401124 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="8590b477-1f35-4afa-84b9-e96cb2c21535" containerName="registry-server" Dec 01 10:03:41 crc kubenswrapper[4933]: I1201 10:03:41.401362 4933 
memory_manager.go:354] "RemoveStaleState removing state" podUID="8590b477-1f35-4afa-84b9-e96cb2c21535" containerName="registry-server" Dec 01 10:03:41 crc kubenswrapper[4933]: I1201 10:03:41.401386 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4bcbb87-2840-4779-ad5a-9da140a34e9a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 01 10:03:41 crc kubenswrapper[4933]: I1201 10:03:41.402231 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl" Dec 01 10:03:41 crc kubenswrapper[4933]: I1201 10:03:41.408526 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 10:03:41 crc kubenswrapper[4933]: I1201 10:03:41.408624 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 10:03:41 crc kubenswrapper[4933]: I1201 10:03:41.408545 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8vmpq" Dec 01 10:03:41 crc kubenswrapper[4933]: I1201 10:03:41.408913 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 10:03:41 crc kubenswrapper[4933]: I1201 10:03:41.437658 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl"] Dec 01 10:03:41 crc kubenswrapper[4933]: I1201 10:03:41.582200 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/af544a25-e743-4edc-8d80-228c9da3ce45-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl\" (UID: \"af544a25-e743-4edc-8d80-228c9da3ce45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl" Dec 01 10:03:41 crc kubenswrapper[4933]: I1201 10:03:41.582656 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af544a25-e743-4edc-8d80-228c9da3ce45-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl\" (UID: \"af544a25-e743-4edc-8d80-228c9da3ce45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl" Dec 01 10:03:41 crc kubenswrapper[4933]: I1201 10:03:41.583016 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgcmm\" (UniqueName: \"kubernetes.io/projected/af544a25-e743-4edc-8d80-228c9da3ce45-kube-api-access-vgcmm\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl\" (UID: \"af544a25-e743-4edc-8d80-228c9da3ce45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl" Dec 01 10:03:41 crc kubenswrapper[4933]: I1201 10:03:41.685761 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/af544a25-e743-4edc-8d80-228c9da3ce45-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl\" (UID: \"af544a25-e743-4edc-8d80-228c9da3ce45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl" Dec 01 10:03:41 crc kubenswrapper[4933]: I1201 10:03:41.685896 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/af544a25-e743-4edc-8d80-228c9da3ce45-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl\" (UID: \"af544a25-e743-4edc-8d80-228c9da3ce45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl" Dec 01 10:03:41 crc kubenswrapper[4933]: I1201 10:03:41.686025 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgcmm\" (UniqueName: \"kubernetes.io/projected/af544a25-e743-4edc-8d80-228c9da3ce45-kube-api-access-vgcmm\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl\" (UID: \"af544a25-e743-4edc-8d80-228c9da3ce45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl" Dec 01 10:03:41 crc kubenswrapper[4933]: I1201 10:03:41.692518 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/af544a25-e743-4edc-8d80-228c9da3ce45-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl\" (UID: \"af544a25-e743-4edc-8d80-228c9da3ce45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl" Dec 01 10:03:41 crc kubenswrapper[4933]: I1201 10:03:41.692541 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af544a25-e743-4edc-8d80-228c9da3ce45-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl\" (UID: \"af544a25-e743-4edc-8d80-228c9da3ce45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl" Dec 01 10:03:41 crc kubenswrapper[4933]: I1201 10:03:41.708470 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgcmm\" (UniqueName: \"kubernetes.io/projected/af544a25-e743-4edc-8d80-228c9da3ce45-kube-api-access-vgcmm\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl\" (UID: \"af544a25-e743-4edc-8d80-228c9da3ce45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl" Dec 01 10:03:41 crc kubenswrapper[4933]: I1201 10:03:41.726544 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl" Dec 01 10:03:42 crc kubenswrapper[4933]: I1201 10:03:42.291181 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl"] Dec 01 10:03:42 crc kubenswrapper[4933]: I1201 10:03:42.313723 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl" event={"ID":"af544a25-e743-4edc-8d80-228c9da3ce45","Type":"ContainerStarted","Data":"1cd2461cc664504a8f07a78f9543fffe7b676630e0297added4483bad26d3474"} Dec 01 10:03:43 crc kubenswrapper[4933]: I1201 10:03:43.326204 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl" event={"ID":"af544a25-e743-4edc-8d80-228c9da3ce45","Type":"ContainerStarted","Data":"d32b133b0575edbe168820a8b4717fbdc37bc8bc9ff711a0cee4a00d4ec15455"} Dec 01 10:03:48 crc kubenswrapper[4933]: I1201 10:03:48.382479 4933 generic.go:334] "Generic (PLEG): container finished" podID="af544a25-e743-4edc-8d80-228c9da3ce45" containerID="d32b133b0575edbe168820a8b4717fbdc37bc8bc9ff711a0cee4a00d4ec15455" exitCode=0 Dec 01 10:03:48 crc kubenswrapper[4933]: I1201 10:03:48.382520 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl" event={"ID":"af544a25-e743-4edc-8d80-228c9da3ce45","Type":"ContainerDied","Data":"d32b133b0575edbe168820a8b4717fbdc37bc8bc9ff711a0cee4a00d4ec15455"} Dec 01 10:03:48 crc kubenswrapper[4933]: I1201 10:03:48.667351 4933 scope.go:117] "RemoveContainer" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" Dec 01 10:03:48 crc kubenswrapper[4933]: E1201 10:03:48.667797 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:03:49 crc kubenswrapper[4933]: I1201 10:03:49.859624 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl" Dec 01 10:03:49 crc kubenswrapper[4933]: I1201 10:03:49.890726 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/af544a25-e743-4edc-8d80-228c9da3ce45-ssh-key\") pod \"af544a25-e743-4edc-8d80-228c9da3ce45\" (UID: \"af544a25-e743-4edc-8d80-228c9da3ce45\") " Dec 01 10:03:49 crc kubenswrapper[4933]: I1201 10:03:49.890958 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgcmm\" (UniqueName: \"kubernetes.io/projected/af544a25-e743-4edc-8d80-228c9da3ce45-kube-api-access-vgcmm\") pod \"af544a25-e743-4edc-8d80-228c9da3ce45\" (UID: \"af544a25-e743-4edc-8d80-228c9da3ce45\") " Dec 01 10:03:49 crc kubenswrapper[4933]: I1201 10:03:49.891053 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af544a25-e743-4edc-8d80-228c9da3ce45-inventory\") pod \"af544a25-e743-4edc-8d80-228c9da3ce45\" (UID: \"af544a25-e743-4edc-8d80-228c9da3ce45\") " Dec 01 10:03:49 crc kubenswrapper[4933]: I1201 10:03:49.898284 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af544a25-e743-4edc-8d80-228c9da3ce45-kube-api-access-vgcmm" (OuterVolumeSpecName: "kube-api-access-vgcmm") pod "af544a25-e743-4edc-8d80-228c9da3ce45" (UID: "af544a25-e743-4edc-8d80-228c9da3ce45"). InnerVolumeSpecName "kube-api-access-vgcmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:03:49 crc kubenswrapper[4933]: I1201 10:03:49.924215 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af544a25-e743-4edc-8d80-228c9da3ce45-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "af544a25-e743-4edc-8d80-228c9da3ce45" (UID: "af544a25-e743-4edc-8d80-228c9da3ce45"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:03:49 crc kubenswrapper[4933]: I1201 10:03:49.924246 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af544a25-e743-4edc-8d80-228c9da3ce45-inventory" (OuterVolumeSpecName: "inventory") pod "af544a25-e743-4edc-8d80-228c9da3ce45" (UID: "af544a25-e743-4edc-8d80-228c9da3ce45"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:03:49 crc kubenswrapper[4933]: I1201 10:03:49.995255 4933 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/af544a25-e743-4edc-8d80-228c9da3ce45-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 10:03:49 crc kubenswrapper[4933]: I1201 10:03:49.995328 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgcmm\" (UniqueName: \"kubernetes.io/projected/af544a25-e743-4edc-8d80-228c9da3ce45-kube-api-access-vgcmm\") on node \"crc\" DevicePath \"\"" Dec 01 10:03:49 crc kubenswrapper[4933]: I1201 10:03:49.995349 4933 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af544a25-e743-4edc-8d80-228c9da3ce45-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 10:03:50 crc kubenswrapper[4933]: I1201 10:03:50.408190 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl" event={"ID":"af544a25-e743-4edc-8d80-228c9da3ce45","Type":"ContainerDied","Data":"1cd2461cc664504a8f07a78f9543fffe7b676630e0297added4483bad26d3474"} Dec 01 10:03:50 crc kubenswrapper[4933]: I1201 10:03:50.408251 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cd2461cc664504a8f07a78f9543fffe7b676630e0297added4483bad26d3474" Dec 01 10:03:50 crc kubenswrapper[4933]: I1201 10:03:50.408266 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl" Dec 01 10:03:50 crc kubenswrapper[4933]: I1201 10:03:50.497162 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-8qwvq"] Dec 01 10:03:50 crc kubenswrapper[4933]: E1201 10:03:50.497932 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af544a25-e743-4edc-8d80-228c9da3ce45" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 01 10:03:50 crc kubenswrapper[4933]: I1201 10:03:50.497977 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="af544a25-e743-4edc-8d80-228c9da3ce45" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 01 10:03:50 crc kubenswrapper[4933]: I1201 10:03:50.498359 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="af544a25-e743-4edc-8d80-228c9da3ce45" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 01 10:03:50 crc kubenswrapper[4933]: I1201 10:03:50.499281 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8qwvq" Dec 01 10:03:50 crc kubenswrapper[4933]: I1201 10:03:50.503506 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 10:03:50 crc kubenswrapper[4933]: I1201 10:03:50.503571 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8vmpq" Dec 01 10:03:50 crc kubenswrapper[4933]: I1201 10:03:50.503781 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 10:03:50 crc kubenswrapper[4933]: I1201 10:03:50.503902 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 10:03:50 crc kubenswrapper[4933]: I1201 10:03:50.511560 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-8qwvq"] Dec 01 10:03:50 crc kubenswrapper[4933]: I1201 10:03:50.610263 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c212c516-4550-436c-8864-c1ff02cf5b14-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8qwvq\" (UID: \"c212c516-4550-436c-8864-c1ff02cf5b14\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8qwvq" Dec 01 10:03:50 crc kubenswrapper[4933]: I1201 10:03:50.610350 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xtn4\" (UniqueName: \"kubernetes.io/projected/c212c516-4550-436c-8864-c1ff02cf5b14-kube-api-access-5xtn4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8qwvq\" (UID: \"c212c516-4550-436c-8864-c1ff02cf5b14\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8qwvq" Dec 01 10:03:50 crc kubenswrapper[4933]: I1201 10:03:50.610390 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c212c516-4550-436c-8864-c1ff02cf5b14-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8qwvq\" (UID: \"c212c516-4550-436c-8864-c1ff02cf5b14\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8qwvq" Dec 01 10:03:50 crc kubenswrapper[4933]: I1201 10:03:50.713211 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c212c516-4550-436c-8864-c1ff02cf5b14-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8qwvq\" (UID: \"c212c516-4550-436c-8864-c1ff02cf5b14\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8qwvq" Dec 01 10:03:50 crc kubenswrapper[4933]: I1201 10:03:50.713265 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xtn4\" (UniqueName: \"kubernetes.io/projected/c212c516-4550-436c-8864-c1ff02cf5b14-kube-api-access-5xtn4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8qwvq\" (UID: \"c212c516-4550-436c-8864-c1ff02cf5b14\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8qwvq" Dec 01 10:03:50 crc kubenswrapper[4933]: I1201 10:03:50.713294 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c212c516-4550-436c-8864-c1ff02cf5b14-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8qwvq\" (UID: 
\"c212c516-4550-436c-8864-c1ff02cf5b14\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8qwvq" Dec 01 10:03:50 crc kubenswrapper[4933]: I1201 10:03:50.721263 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c212c516-4550-436c-8864-c1ff02cf5b14-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8qwvq\" (UID: \"c212c516-4550-436c-8864-c1ff02cf5b14\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8qwvq" Dec 01 10:03:50 crc kubenswrapper[4933]: I1201 10:03:50.731536 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c212c516-4550-436c-8864-c1ff02cf5b14-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8qwvq\" (UID: \"c212c516-4550-436c-8864-c1ff02cf5b14\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8qwvq" Dec 01 10:03:50 crc kubenswrapper[4933]: I1201 10:03:50.740751 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xtn4\" (UniqueName: \"kubernetes.io/projected/c212c516-4550-436c-8864-c1ff02cf5b14-kube-api-access-5xtn4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8qwvq\" (UID: \"c212c516-4550-436c-8864-c1ff02cf5b14\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8qwvq" Dec 01 10:03:50 crc kubenswrapper[4933]: I1201 10:03:50.822410 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8qwvq" Dec 01 10:03:51 crc kubenswrapper[4933]: I1201 10:03:51.392815 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-8qwvq"] Dec 01 10:03:51 crc kubenswrapper[4933]: I1201 10:03:51.423608 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8qwvq" event={"ID":"c212c516-4550-436c-8864-c1ff02cf5b14","Type":"ContainerStarted","Data":"ec841faf87eddddd8c9ec1dc77938b390ba8753d2ca9d06b1956c8060e8c1b47"} Dec 01 10:03:53 crc kubenswrapper[4933]: I1201 10:03:53.057279 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xzlnq"] Dec 01 10:03:53 crc kubenswrapper[4933]: I1201 10:03:53.068976 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xzlnq"] Dec 01 10:03:53 crc kubenswrapper[4933]: I1201 10:03:53.446540 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8qwvq" event={"ID":"c212c516-4550-436c-8864-c1ff02cf5b14","Type":"ContainerStarted","Data":"a25289ce5cd00391b4e4eb02a7d589febc6cac073f35d914b3711abce08143e5"} Dec 01 10:03:53 crc kubenswrapper[4933]: I1201 10:03:53.470003 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8qwvq" podStartSLOduration=2.299274589 podStartE2EDuration="3.469972626s" podCreationTimestamp="2025-12-01 10:03:50 +0000 UTC" firstStartedPulling="2025-12-01 10:03:51.39994077 +0000 UTC m=+1922.041664385" lastFinishedPulling="2025-12-01 10:03:52.570638807 +0000 UTC m=+1923.212362422" observedRunningTime="2025-12-01 10:03:53.46323457 +0000 UTC m=+1924.104958205" watchObservedRunningTime="2025-12-01 10:03:53.469972626 +0000 UTC m=+1924.111696241" Dec 01 10:03:53 crc kubenswrapper[4933]: I1201 10:03:53.679854 4933 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="9427df5f-7233-4b84-b1c7-1567a9b686df" path="/var/lib/kubelet/pods/9427df5f-7233-4b84-b1c7-1567a9b686df/volumes" Dec 01 10:03:58 crc kubenswrapper[4933]: I1201 10:03:58.075729 4933 scope.go:117] "RemoveContainer" containerID="a3131557785e450e30c2fd7bf5e0f491318167e20a23783852cf303a73a03e4d" Dec 01 10:03:58 crc kubenswrapper[4933]: I1201 10:03:58.112829 4933 scope.go:117] "RemoveContainer" containerID="2008d28709e4d35910b136e95a746ebcf6b19cd0952ac1cf45cfe90e36235d13" Dec 01 10:03:58 crc kubenswrapper[4933]: I1201 10:03:58.199108 4933 scope.go:117] "RemoveContainer" containerID="1ddb88de233bc10b0f440f0025d133358f5f80d7e4ffbe5cdbf6c586da45be96" Dec 01 10:03:58 crc kubenswrapper[4933]: I1201 10:03:58.221764 4933 scope.go:117] "RemoveContainer" containerID="08daec3cb872733483f3158656edc725ee1951b66faf08eea56fbe70ca813a2a" Dec 01 10:03:58 crc kubenswrapper[4933]: I1201 10:03:58.302444 4933 scope.go:117] "RemoveContainer" containerID="26e49ca2e47060e6a843f15c74d04322e5caec39ae4e0416a9e352978bb57216" Dec 01 10:03:58 crc kubenswrapper[4933]: I1201 10:03:58.341476 4933 scope.go:117] "RemoveContainer" containerID="9059b2f0e9061395c82a19a92f43e573c7f53f5e3a24a267845c80c70f86fe6b" Dec 01 10:03:58 crc kubenswrapper[4933]: I1201 10:03:58.388161 4933 scope.go:117] "RemoveContainer" containerID="fc451e4f15370c67ad3aa2b8e098c69dc8a5400deda51da59159228ecf92be75" Dec 01 10:04:02 crc kubenswrapper[4933]: I1201 10:04:02.669189 4933 scope.go:117] "RemoveContainer" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" Dec 01 10:04:02 crc kubenswrapper[4933]: E1201 10:04:02.670210 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:04:13 crc kubenswrapper[4933]: I1201 10:04:13.038325 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-7mcq4"] Dec 01 10:04:13 crc kubenswrapper[4933]: I1201 10:04:13.050295 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-7mcq4"] Dec 01 10:04:13 crc kubenswrapper[4933]: I1201 10:04:13.681675 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd09dcd1-e9a3-40dd-9497-11d652bad925" path="/var/lib/kubelet/pods/cd09dcd1-e9a3-40dd-9497-11d652bad925/volumes" Dec 01 10:04:16 crc kubenswrapper[4933]: I1201 10:04:16.031918 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5tkhr"] Dec 01 10:04:16 crc kubenswrapper[4933]: I1201 10:04:16.043117 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5tkhr"] Dec 01 10:04:16 crc kubenswrapper[4933]: I1201 10:04:16.668112 4933 scope.go:117] "RemoveContainer" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" Dec 01 10:04:16 crc kubenswrapper[4933]: E1201 10:04:16.668431 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:04:17 crc kubenswrapper[4933]: I1201 10:04:17.682910 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a" path="/var/lib/kubelet/pods/dd8e4327-7bd2-49f1-ab79-e8e9eeb82e9a/volumes" Dec 01 10:04:28 crc kubenswrapper[4933]: I1201 10:04:28.668443 4933 scope.go:117] "RemoveContainer" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" Dec 01 10:04:28 crc kubenswrapper[4933]: E1201 10:04:28.669648 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:04:30 crc kubenswrapper[4933]: I1201 10:04:30.836978 4933 generic.go:334] "Generic (PLEG): container finished" podID="c212c516-4550-436c-8864-c1ff02cf5b14" containerID="a25289ce5cd00391b4e4eb02a7d589febc6cac073f35d914b3711abce08143e5" exitCode=0 Dec 01 10:04:30 crc kubenswrapper[4933]: I1201 10:04:30.837056 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8qwvq" event={"ID":"c212c516-4550-436c-8864-c1ff02cf5b14","Type":"ContainerDied","Data":"a25289ce5cd00391b4e4eb02a7d589febc6cac073f35d914b3711abce08143e5"} Dec 01 10:04:32 crc kubenswrapper[4933]: I1201 10:04:32.304964 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8qwvq" Dec 01 10:04:32 crc kubenswrapper[4933]: I1201 10:04:32.414795 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c212c516-4550-436c-8864-c1ff02cf5b14-ssh-key\") pod \"c212c516-4550-436c-8864-c1ff02cf5b14\" (UID: \"c212c516-4550-436c-8864-c1ff02cf5b14\") " Dec 01 10:04:32 crc kubenswrapper[4933]: I1201 10:04:32.415069 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xtn4\" (UniqueName: \"kubernetes.io/projected/c212c516-4550-436c-8864-c1ff02cf5b14-kube-api-access-5xtn4\") pod \"c212c516-4550-436c-8864-c1ff02cf5b14\" (UID: \"c212c516-4550-436c-8864-c1ff02cf5b14\") " Dec 01 10:04:32 crc kubenswrapper[4933]: I1201 10:04:32.415196 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c212c516-4550-436c-8864-c1ff02cf5b14-inventory\") pod \"c212c516-4550-436c-8864-c1ff02cf5b14\" (UID: \"c212c516-4550-436c-8864-c1ff02cf5b14\") " Dec 01 10:04:32 crc kubenswrapper[4933]: I1201 10:04:32.422686 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c212c516-4550-436c-8864-c1ff02cf5b14-kube-api-access-5xtn4" (OuterVolumeSpecName: "kube-api-access-5xtn4") pod "c212c516-4550-436c-8864-c1ff02cf5b14" (UID: "c212c516-4550-436c-8864-c1ff02cf5b14"). InnerVolumeSpecName "kube-api-access-5xtn4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:04:32 crc kubenswrapper[4933]: I1201 10:04:32.448588 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c212c516-4550-436c-8864-c1ff02cf5b14-inventory" (OuterVolumeSpecName: "inventory") pod "c212c516-4550-436c-8864-c1ff02cf5b14" (UID: "c212c516-4550-436c-8864-c1ff02cf5b14"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:04:32 crc kubenswrapper[4933]: I1201 10:04:32.453981 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c212c516-4550-436c-8864-c1ff02cf5b14-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c212c516-4550-436c-8864-c1ff02cf5b14" (UID: "c212c516-4550-436c-8864-c1ff02cf5b14"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:04:32 crc kubenswrapper[4933]: I1201 10:04:32.517697 4933 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c212c516-4550-436c-8864-c1ff02cf5b14-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 10:04:32 crc kubenswrapper[4933]: I1201 10:04:32.517750 4933 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c212c516-4550-436c-8864-c1ff02cf5b14-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 10:04:32 crc kubenswrapper[4933]: I1201 10:04:32.517768 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xtn4\" (UniqueName: \"kubernetes.io/projected/c212c516-4550-436c-8864-c1ff02cf5b14-kube-api-access-5xtn4\") on node \"crc\" DevicePath \"\"" Dec 01 10:04:32 crc kubenswrapper[4933]: I1201 10:04:32.862207 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8qwvq" event={"ID":"c212c516-4550-436c-8864-c1ff02cf5b14","Type":"ContainerDied","Data":"ec841faf87eddddd8c9ec1dc77938b390ba8753d2ca9d06b1956c8060e8c1b47"} Dec 01 10:04:32 crc kubenswrapper[4933]: I1201 10:04:32.862903 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec841faf87eddddd8c9ec1dc77938b390ba8753d2ca9d06b1956c8060e8c1b47" Dec 01 10:04:32 crc kubenswrapper[4933]: I1201 10:04:32.862545 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8qwvq" Dec 01 10:04:32 crc kubenswrapper[4933]: I1201 10:04:32.954100 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r68nv"] Dec 01 10:04:32 crc kubenswrapper[4933]: E1201 10:04:32.954539 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c212c516-4550-436c-8864-c1ff02cf5b14" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 01 10:04:32 crc kubenswrapper[4933]: I1201 10:04:32.954560 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="c212c516-4550-436c-8864-c1ff02cf5b14" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 01 10:04:32 crc kubenswrapper[4933]: I1201 10:04:32.954766 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="c212c516-4550-436c-8864-c1ff02cf5b14" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 01 10:04:32 crc kubenswrapper[4933]: I1201 10:04:32.955545 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r68nv" Dec 01 10:04:32 crc kubenswrapper[4933]: I1201 10:04:32.958093 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 10:04:32 crc kubenswrapper[4933]: I1201 10:04:32.958104 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 10:04:32 crc kubenswrapper[4933]: I1201 10:04:32.958983 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8vmpq" Dec 01 10:04:32 crc kubenswrapper[4933]: I1201 10:04:32.959037 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 10:04:32 crc kubenswrapper[4933]: I1201 10:04:32.981892 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r68nv"] Dec 01 10:04:33 crc kubenswrapper[4933]: I1201 10:04:33.030850 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cbc2f4a-039d-45ef-9b06-1e1d59f11abb-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r68nv\" (UID: \"5cbc2f4a-039d-45ef-9b06-1e1d59f11abb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r68nv" Dec 01 10:04:33 crc kubenswrapper[4933]: I1201 10:04:33.031414 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bctng\" (UniqueName: \"kubernetes.io/projected/5cbc2f4a-039d-45ef-9b06-1e1d59f11abb-kube-api-access-bctng\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r68nv\" (UID: \"5cbc2f4a-039d-45ef-9b06-1e1d59f11abb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r68nv" Dec 01 10:04:33 crc kubenswrapper[4933]: I1201 10:04:33.031584 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5cbc2f4a-039d-45ef-9b06-1e1d59f11abb-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r68nv\" (UID: \"5cbc2f4a-039d-45ef-9b06-1e1d59f11abb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r68nv" Dec 01 10:04:33 crc kubenswrapper[4933]: I1201 10:04:33.133412 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cbc2f4a-039d-45ef-9b06-1e1d59f11abb-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r68nv\" (UID: \"5cbc2f4a-039d-45ef-9b06-1e1d59f11abb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r68nv" Dec 01 10:04:33 crc kubenswrapper[4933]: I1201 10:04:33.133625 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bctng\" (UniqueName: \"kubernetes.io/projected/5cbc2f4a-039d-45ef-9b06-1e1d59f11abb-kube-api-access-bctng\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r68nv\" (UID: \"5cbc2f4a-039d-45ef-9b06-1e1d59f11abb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r68nv" Dec 01 10:04:33 crc kubenswrapper[4933]: I1201 10:04:33.133705 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5cbc2f4a-039d-45ef-9b06-1e1d59f11abb-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r68nv\" 
(UID: \"5cbc2f4a-039d-45ef-9b06-1e1d59f11abb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r68nv" Dec 01 10:04:33 crc kubenswrapper[4933]: I1201 10:04:33.137919 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5cbc2f4a-039d-45ef-9b06-1e1d59f11abb-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r68nv\" (UID: \"5cbc2f4a-039d-45ef-9b06-1e1d59f11abb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r68nv" Dec 01 10:04:33 crc kubenswrapper[4933]: I1201 10:04:33.139531 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cbc2f4a-039d-45ef-9b06-1e1d59f11abb-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r68nv\" (UID: \"5cbc2f4a-039d-45ef-9b06-1e1d59f11abb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r68nv" Dec 01 10:04:33 crc kubenswrapper[4933]: I1201 10:04:33.155681 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bctng\" (UniqueName: \"kubernetes.io/projected/5cbc2f4a-039d-45ef-9b06-1e1d59f11abb-kube-api-access-bctng\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r68nv\" (UID: \"5cbc2f4a-039d-45ef-9b06-1e1d59f11abb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r68nv" Dec 01 10:04:33 crc kubenswrapper[4933]: I1201 10:04:33.276177 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r68nv" Dec 01 10:04:33 crc kubenswrapper[4933]: I1201 10:04:33.851104 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r68nv"] Dec 01 10:04:33 crc kubenswrapper[4933]: I1201 10:04:33.871977 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r68nv" event={"ID":"5cbc2f4a-039d-45ef-9b06-1e1d59f11abb","Type":"ContainerStarted","Data":"1350abf5af6d1cc2c2584696cdc237acf334893746f0481d531b249585e17e91"} Dec 01 10:04:34 crc kubenswrapper[4933]: I1201 10:04:34.887185 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r68nv" event={"ID":"5cbc2f4a-039d-45ef-9b06-1e1d59f11abb","Type":"ContainerStarted","Data":"e5f2dfb677de1a27d508bc55294ab0356f2f6bd3e85e74b631cb58bb15eb914d"} Dec 01 10:04:34 crc kubenswrapper[4933]: I1201 10:04:34.907705 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r68nv" podStartSLOduration=2.300084028 podStartE2EDuration="2.90768416s" podCreationTimestamp="2025-12-01 10:04:32 +0000 UTC" firstStartedPulling="2025-12-01 10:04:33.849163987 +0000 UTC m=+1964.490887602" lastFinishedPulling="2025-12-01 10:04:34.456764119 +0000 UTC m=+1965.098487734" observedRunningTime="2025-12-01 10:04:34.904876021 +0000 UTC m=+1965.546599646" watchObservedRunningTime="2025-12-01 10:04:34.90768416 +0000 UTC m=+1965.549407785" Dec 01 10:04:42 crc kubenswrapper[4933]: I1201 10:04:42.668446 4933 scope.go:117] "RemoveContainer" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" Dec 01 10:04:42 crc kubenswrapper[4933]: I1201 10:04:42.958267 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" 
event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerStarted","Data":"7a98bf93ed27d22c27782b178fc01678ac5109b9d40b5fc3e17d08873b6a98b6"} Dec 01 10:04:58 crc kubenswrapper[4933]: I1201 10:04:58.559947 4933 scope.go:117] "RemoveContainer" containerID="fbb24a7e5a772eba1990812c40849d9e691917207c4dd782c53e7dbf25c80bb5" Dec 01 10:04:58 crc kubenswrapper[4933]: I1201 10:04:58.618452 4933 scope.go:117] "RemoveContainer" containerID="0bc9570458a3846b9903513456966d5f2d858536902d211d6f3939f143091769" Dec 01 10:04:59 crc kubenswrapper[4933]: I1201 10:04:59.047680 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-ksfbz"] Dec 01 10:04:59 crc kubenswrapper[4933]: I1201 10:04:59.059068 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-ksfbz"] Dec 01 10:04:59 crc kubenswrapper[4933]: I1201 10:04:59.678289 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d94caca0-0445-4841-bbaa-2e070afb5051" path="/var/lib/kubelet/pods/d94caca0-0445-4841-bbaa-2e070afb5051/volumes" Dec 01 10:05:24 crc kubenswrapper[4933]: I1201 10:05:24.402527 4933 generic.go:334] "Generic (PLEG): container finished" podID="5cbc2f4a-039d-45ef-9b06-1e1d59f11abb" containerID="e5f2dfb677de1a27d508bc55294ab0356f2f6bd3e85e74b631cb58bb15eb914d" exitCode=0 Dec 01 10:05:24 crc kubenswrapper[4933]: I1201 10:05:24.402615 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r68nv" event={"ID":"5cbc2f4a-039d-45ef-9b06-1e1d59f11abb","Type":"ContainerDied","Data":"e5f2dfb677de1a27d508bc55294ab0356f2f6bd3e85e74b631cb58bb15eb914d"} Dec 01 10:05:25 crc kubenswrapper[4933]: I1201 10:05:25.887078 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r68nv" Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.031600 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cbc2f4a-039d-45ef-9b06-1e1d59f11abb-inventory\") pod \"5cbc2f4a-039d-45ef-9b06-1e1d59f11abb\" (UID: \"5cbc2f4a-039d-45ef-9b06-1e1d59f11abb\") " Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.032202 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5cbc2f4a-039d-45ef-9b06-1e1d59f11abb-ssh-key\") pod \"5cbc2f4a-039d-45ef-9b06-1e1d59f11abb\" (UID: \"5cbc2f4a-039d-45ef-9b06-1e1d59f11abb\") " Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.032492 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bctng\" (UniqueName: \"kubernetes.io/projected/5cbc2f4a-039d-45ef-9b06-1e1d59f11abb-kube-api-access-bctng\") pod \"5cbc2f4a-039d-45ef-9b06-1e1d59f11abb\" (UID: \"5cbc2f4a-039d-45ef-9b06-1e1d59f11abb\") " Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.039356 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cbc2f4a-039d-45ef-9b06-1e1d59f11abb-kube-api-access-bctng" (OuterVolumeSpecName: "kube-api-access-bctng") pod "5cbc2f4a-039d-45ef-9b06-1e1d59f11abb" (UID: "5cbc2f4a-039d-45ef-9b06-1e1d59f11abb"). InnerVolumeSpecName "kube-api-access-bctng". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.063679 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cbc2f4a-039d-45ef-9b06-1e1d59f11abb-inventory" (OuterVolumeSpecName: "inventory") pod "5cbc2f4a-039d-45ef-9b06-1e1d59f11abb" (UID: "5cbc2f4a-039d-45ef-9b06-1e1d59f11abb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.064274 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cbc2f4a-039d-45ef-9b06-1e1d59f11abb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5cbc2f4a-039d-45ef-9b06-1e1d59f11abb" (UID: "5cbc2f4a-039d-45ef-9b06-1e1d59f11abb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.135298 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bctng\" (UniqueName: \"kubernetes.io/projected/5cbc2f4a-039d-45ef-9b06-1e1d59f11abb-kube-api-access-bctng\") on node \"crc\" DevicePath \"\"" Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.135400 4933 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cbc2f4a-039d-45ef-9b06-1e1d59f11abb-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.135411 4933 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5cbc2f4a-039d-45ef-9b06-1e1d59f11abb-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.427202 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r68nv" event={"ID":"5cbc2f4a-039d-45ef-9b06-1e1d59f11abb","Type":"ContainerDied","Data":"1350abf5af6d1cc2c2584696cdc237acf334893746f0481d531b249585e17e91"} Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.427259 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1350abf5af6d1cc2c2584696cdc237acf334893746f0481d531b249585e17e91" Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.427295 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r68nv" Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.568094 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gt2rs"] Dec 01 10:05:26 crc kubenswrapper[4933]: E1201 10:05:26.568672 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cbc2f4a-039d-45ef-9b06-1e1d59f11abb" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.568696 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cbc2f4a-039d-45ef-9b06-1e1d59f11abb" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.568944 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cbc2f4a-039d-45ef-9b06-1e1d59f11abb" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.570024 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gt2rs" Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.573874 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8vmpq" Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.574491 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.574529 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.575214 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.582869 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gt2rs"] Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.750188 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2735060-c736-46d2-882c-60c0a7e96bc8-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gt2rs\" (UID: \"d2735060-c736-46d2-882c-60c0a7e96bc8\") " pod="openstack/ssh-known-hosts-edpm-deployment-gt2rs" Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.750341 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7wkt\" (UniqueName: \"kubernetes.io/projected/d2735060-c736-46d2-882c-60c0a7e96bc8-kube-api-access-d7wkt\") pod \"ssh-known-hosts-edpm-deployment-gt2rs\" (UID: \"d2735060-c736-46d2-882c-60c0a7e96bc8\") " pod="openstack/ssh-known-hosts-edpm-deployment-gt2rs" Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.750409 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d2735060-c736-46d2-882c-60c0a7e96bc8-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gt2rs\" (UID: \"d2735060-c736-46d2-882c-60c0a7e96bc8\") " pod="openstack/ssh-known-hosts-edpm-deployment-gt2rs" Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.853208 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7wkt\" (UniqueName: \"kubernetes.io/projected/d2735060-c736-46d2-882c-60c0a7e96bc8-kube-api-access-d7wkt\") pod \"ssh-known-hosts-edpm-deployment-gt2rs\" (UID: \"d2735060-c736-46d2-882c-60c0a7e96bc8\") " pod="openstack/ssh-known-hosts-edpm-deployment-gt2rs" Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.853341 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d2735060-c736-46d2-882c-60c0a7e96bc8-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gt2rs\" (UID: \"d2735060-c736-46d2-882c-60c0a7e96bc8\") " pod="openstack/ssh-known-hosts-edpm-deployment-gt2rs" Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.853423 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2735060-c736-46d2-882c-60c0a7e96bc8-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gt2rs\" (UID: \"d2735060-c736-46d2-882c-60c0a7e96bc8\") " pod="openstack/ssh-known-hosts-edpm-deployment-gt2rs" Dec 01 10:05:26 crc 
kubenswrapper[4933]: I1201 10:05:26.859640 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2735060-c736-46d2-882c-60c0a7e96bc8-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gt2rs\" (UID: \"d2735060-c736-46d2-882c-60c0a7e96bc8\") " pod="openstack/ssh-known-hosts-edpm-deployment-gt2rs" Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.859757 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d2735060-c736-46d2-882c-60c0a7e96bc8-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gt2rs\" (UID: \"d2735060-c736-46d2-882c-60c0a7e96bc8\") " pod="openstack/ssh-known-hosts-edpm-deployment-gt2rs" Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.874892 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7wkt\" (UniqueName: \"kubernetes.io/projected/d2735060-c736-46d2-882c-60c0a7e96bc8-kube-api-access-d7wkt\") pod \"ssh-known-hosts-edpm-deployment-gt2rs\" (UID: \"d2735060-c736-46d2-882c-60c0a7e96bc8\") " pod="openstack/ssh-known-hosts-edpm-deployment-gt2rs" Dec 01 10:05:26 crc kubenswrapper[4933]: I1201 10:05:26.890712 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gt2rs" Dec 01 10:05:27 crc kubenswrapper[4933]: I1201 10:05:27.484036 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gt2rs"] Dec 01 10:05:27 crc kubenswrapper[4933]: I1201 10:05:27.490806 4933 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 10:05:28 crc kubenswrapper[4933]: I1201 10:05:28.453586 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gt2rs" event={"ID":"d2735060-c736-46d2-882c-60c0a7e96bc8","Type":"ContainerStarted","Data":"70d51d83d13b89f3bce17bab7a981c46e646037121af5c6cb1f427f93c8933bb"} Dec 01 10:05:28 crc kubenswrapper[4933]: I1201 10:05:28.454038 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gt2rs" event={"ID":"d2735060-c736-46d2-882c-60c0a7e96bc8","Type":"ContainerStarted","Data":"50f433d07389e87d423e67f4ef4ca92db44af69566b46f86672a009b65953b30"} Dec 01 10:05:28 crc kubenswrapper[4933]: I1201 10:05:28.481751 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-gt2rs" podStartSLOduration=1.9806074649999998 podStartE2EDuration="2.481721683s" podCreationTimestamp="2025-12-01 10:05:26 +0000 UTC" firstStartedPulling="2025-12-01 10:05:27.490416117 +0000 UTC m=+2018.132139742" lastFinishedPulling="2025-12-01 10:05:27.991530345 +0000 UTC m=+2018.633253960" observedRunningTime="2025-12-01 10:05:28.471327487 +0000 UTC m=+2019.113051102" watchObservedRunningTime="2025-12-01 10:05:28.481721683 +0000 UTC m=+2019.123445318" Dec 01 10:05:35 crc kubenswrapper[4933]: I1201 10:05:35.525073 4933 generic.go:334] "Generic (PLEG): container finished" podID="d2735060-c736-46d2-882c-60c0a7e96bc8" containerID="70d51d83d13b89f3bce17bab7a981c46e646037121af5c6cb1f427f93c8933bb" exitCode=0 Dec 01 10:05:35 crc kubenswrapper[4933]: I1201 10:05:35.525142 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gt2rs" 
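Each kubenswrapper payload after the journald prefix ("Dec 01 10:05:28 crc kubenswrapper[4933]:") is a klog header — severity letter, MMDD date, wall-clock time, PID, source file:line — followed by the message, usually structured as key="value" pairs. A minimal sketch for splitting that header off one of the lines above (the group names are this sketch's own, not an official schema):

```go
package main

import (
	"fmt"
	"regexp"
)

// klog header shape: I1201 10:04:28.668443 4933 scope.go:117] "RemoveContainer" ...
// groups: 1=severity 2=MMDD 3=time 4=pid 5=file:line 6=message payload
var klogRe = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w./]+:\d+)\] (.*)$`)

func main() {
	line := `I1201 10:04:28.668443 4933 scope.go:117] "RemoveContainer" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417"`
	m := klogRe.FindStringSubmatch(line)
	if m == nil {
		panic("line did not match klog header")
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s\n", m[1], m[2], m[3], m[4], m[5])
	fmt.Printf("message=%s\n", m[6])
}
```

The severity letter is the one worth scanning for: the only E-level entries in this window are the CrashLoopBackOff sync error and the routine "RemoveStaleState: removing container" lines from cpu_manager.go, which log at error level even though they accompany a normal pod replacement.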
event={"ID":"d2735060-c736-46d2-882c-60c0a7e96bc8","Type":"ContainerDied","Data":"70d51d83d13b89f3bce17bab7a981c46e646037121af5c6cb1f427f93c8933bb"} Dec 01 10:05:36 crc kubenswrapper[4933]: I1201 10:05:36.923614 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gt2rs" Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.091053 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2735060-c736-46d2-882c-60c0a7e96bc8-ssh-key-openstack-edpm-ipam\") pod \"d2735060-c736-46d2-882c-60c0a7e96bc8\" (UID: \"d2735060-c736-46d2-882c-60c0a7e96bc8\") " Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.091160 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d2735060-c736-46d2-882c-60c0a7e96bc8-inventory-0\") pod \"d2735060-c736-46d2-882c-60c0a7e96bc8\" (UID: \"d2735060-c736-46d2-882c-60c0a7e96bc8\") " Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.091419 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7wkt\" (UniqueName: \"kubernetes.io/projected/d2735060-c736-46d2-882c-60c0a7e96bc8-kube-api-access-d7wkt\") pod \"d2735060-c736-46d2-882c-60c0a7e96bc8\" (UID: \"d2735060-c736-46d2-882c-60c0a7e96bc8\") " Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.100247 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2735060-c736-46d2-882c-60c0a7e96bc8-kube-api-access-d7wkt" (OuterVolumeSpecName: "kube-api-access-d7wkt") pod "d2735060-c736-46d2-882c-60c0a7e96bc8" (UID: "d2735060-c736-46d2-882c-60c0a7e96bc8"). InnerVolumeSpecName "kube-api-access-d7wkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.124588 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2735060-c736-46d2-882c-60c0a7e96bc8-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "d2735060-c736-46d2-882c-60c0a7e96bc8" (UID: "d2735060-c736-46d2-882c-60c0a7e96bc8"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.130500 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2735060-c736-46d2-882c-60c0a7e96bc8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d2735060-c736-46d2-882c-60c0a7e96bc8" (UID: "d2735060-c736-46d2-882c-60c0a7e96bc8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.193989 4933 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2735060-c736-46d2-882c-60c0a7e96bc8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.194024 4933 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d2735060-c736-46d2-882c-60c0a7e96bc8-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.194037 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7wkt\" (UniqueName: \"kubernetes.io/projected/d2735060-c736-46d2-882c-60c0a7e96bc8-kube-api-access-d7wkt\") on node \"crc\" DevicePath \"\"" Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.547428 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gt2rs" event={"ID":"d2735060-c736-46d2-882c-60c0a7e96bc8","Type":"ContainerDied","Data":"50f433d07389e87d423e67f4ef4ca92db44af69566b46f86672a009b65953b30"} Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.547481 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50f433d07389e87d423e67f4ef4ca92db44af69566b46f86672a009b65953b30" Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.547536 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gt2rs" Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.628645 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-78bfv"] Dec 01 10:05:37 crc kubenswrapper[4933]: E1201 10:05:37.629074 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2735060-c736-46d2-882c-60c0a7e96bc8" containerName="ssh-known-hosts-edpm-deployment" Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.629090 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2735060-c736-46d2-882c-60c0a7e96bc8" containerName="ssh-known-hosts-edpm-deployment" Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.629327 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2735060-c736-46d2-882c-60c0a7e96bc8" containerName="ssh-known-hosts-edpm-deployment" Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.630097 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-78bfv" Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.633435 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.633665 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.634196 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8vmpq" Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.634911 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.650584 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-78bfv"] Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.703438 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c57f613c-9cc6-447a-acf6-11a2d381862f-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-78bfv\" (UID: \"c57f613c-9cc6-447a-acf6-11a2d381862f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-78bfv" Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.703521 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkm8r\" (UniqueName: \"kubernetes.io/projected/c57f613c-9cc6-447a-acf6-11a2d381862f-kube-api-access-bkm8r\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-78bfv\" (UID: \"c57f613c-9cc6-447a-acf6-11a2d381862f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-78bfv" Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.703561 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c57f613c-9cc6-447a-acf6-11a2d381862f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-78bfv\" (UID: \"c57f613c-9cc6-447a-acf6-11a2d381862f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-78bfv" Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.806491 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c57f613c-9cc6-447a-acf6-11a2d381862f-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-78bfv\" (UID: \"c57f613c-9cc6-447a-acf6-11a2d381862f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-78bfv" Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.806620 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkm8r\" (UniqueName: \"kubernetes.io/projected/c57f613c-9cc6-447a-acf6-11a2d381862f-kube-api-access-bkm8r\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-78bfv\" (UID: \"c57f613c-9cc6-447a-acf6-11a2d381862f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-78bfv" Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.806676 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c57f613c-9cc6-447a-acf6-11a2d381862f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-78bfv\" (UID: \"c57f613c-9cc6-447a-acf6-11a2d381862f\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-78bfv" Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.812894 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c57f613c-9cc6-447a-acf6-11a2d381862f-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-78bfv\" (UID: \"c57f613c-9cc6-447a-acf6-11a2d381862f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-78bfv" Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.813071 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c57f613c-9cc6-447a-acf6-11a2d381862f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-78bfv\" (UID: \"c57f613c-9cc6-447a-acf6-11a2d381862f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-78bfv" Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.827265 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkm8r\" (UniqueName: \"kubernetes.io/projected/c57f613c-9cc6-447a-acf6-11a2d381862f-kube-api-access-bkm8r\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-78bfv\" (UID: \"c57f613c-9cc6-447a-acf6-11a2d381862f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-78bfv" Dec 01 10:05:37 crc kubenswrapper[4933]: I1201 10:05:37.951466 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-78bfv" Dec 01 10:05:38 crc kubenswrapper[4933]: I1201 10:05:38.527622 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-78bfv"] Dec 01 10:05:38 crc kubenswrapper[4933]: I1201 10:05:38.559687 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-78bfv" event={"ID":"c57f613c-9cc6-447a-acf6-11a2d381862f","Type":"ContainerStarted","Data":"60a36a70c7ee24fec296203b2426de8f846abf90c35d4425e0fb5438a2343b49"} Dec 01 10:05:39 crc kubenswrapper[4933]: I1201 10:05:39.579288 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-78bfv" event={"ID":"c57f613c-9cc6-447a-acf6-11a2d381862f","Type":"ContainerStarted","Data":"9aef52b40ffbfb5f51dea46edbeb745ee9d03d9466d6f139809d5a568be4f98c"} Dec 01 10:05:47 crc kubenswrapper[4933]: I1201 10:05:47.681528 4933 generic.go:334] "Generic (PLEG): container finished" podID="c57f613c-9cc6-447a-acf6-11a2d381862f" containerID="9aef52b40ffbfb5f51dea46edbeb745ee9d03d9466d6f139809d5a568be4f98c" exitCode=0 Dec 01 10:05:47 crc kubenswrapper[4933]: I1201 10:05:47.681606 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-78bfv" event={"ID":"c57f613c-9cc6-447a-acf6-11a2d381862f","Type":"ContainerDied","Data":"9aef52b40ffbfb5f51dea46edbeb745ee9d03d9466d6f139809d5a568be4f98c"} Dec 01 10:05:49 crc kubenswrapper[4933]: I1201 10:05:49.139575 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-78bfv" Dec 01 10:05:49 crc kubenswrapper[4933]: I1201 10:05:49.285001 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c57f613c-9cc6-447a-acf6-11a2d381862f-inventory\") pod \"c57f613c-9cc6-447a-acf6-11a2d381862f\" (UID: \"c57f613c-9cc6-447a-acf6-11a2d381862f\") " Dec 01 10:05:49 crc kubenswrapper[4933]: I1201 10:05:49.285095 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c57f613c-9cc6-447a-acf6-11a2d381862f-ssh-key\") pod \"c57f613c-9cc6-447a-acf6-11a2d381862f\" (UID: \"c57f613c-9cc6-447a-acf6-11a2d381862f\") " Dec 01 10:05:49 crc kubenswrapper[4933]: I1201 10:05:49.285199 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkm8r\" (UniqueName: \"kubernetes.io/projected/c57f613c-9cc6-447a-acf6-11a2d381862f-kube-api-access-bkm8r\") pod \"c57f613c-9cc6-447a-acf6-11a2d381862f\" (UID: \"c57f613c-9cc6-447a-acf6-11a2d381862f\") " Dec 01 10:05:49 crc kubenswrapper[4933]: I1201 10:05:49.294740 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c57f613c-9cc6-447a-acf6-11a2d381862f-kube-api-access-bkm8r" (OuterVolumeSpecName: "kube-api-access-bkm8r") pod "c57f613c-9cc6-447a-acf6-11a2d381862f" (UID: "c57f613c-9cc6-447a-acf6-11a2d381862f"). InnerVolumeSpecName "kube-api-access-bkm8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:05:49 crc kubenswrapper[4933]: I1201 10:05:49.325242 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c57f613c-9cc6-447a-acf6-11a2d381862f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c57f613c-9cc6-447a-acf6-11a2d381862f" (UID: "c57f613c-9cc6-447a-acf6-11a2d381862f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:05:49 crc kubenswrapper[4933]: I1201 10:05:49.327638 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c57f613c-9cc6-447a-acf6-11a2d381862f-inventory" (OuterVolumeSpecName: "inventory") pod "c57f613c-9cc6-447a-acf6-11a2d381862f" (UID: "c57f613c-9cc6-447a-acf6-11a2d381862f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:05:49 crc kubenswrapper[4933]: I1201 10:05:49.389755 4933 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c57f613c-9cc6-447a-acf6-11a2d381862f-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 10:05:49 crc kubenswrapper[4933]: I1201 10:05:49.389812 4933 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c57f613c-9cc6-447a-acf6-11a2d381862f-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 10:05:49 crc kubenswrapper[4933]: I1201 10:05:49.389829 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkm8r\" (UniqueName: \"kubernetes.io/projected/c57f613c-9cc6-447a-acf6-11a2d381862f-kube-api-access-bkm8r\") on node \"crc\" DevicePath \"\"" Dec 01 10:05:49 crc kubenswrapper[4933]: I1201 10:05:49.701658 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-78bfv" event={"ID":"c57f613c-9cc6-447a-acf6-11a2d381862f","Type":"ContainerDied","Data":"60a36a70c7ee24fec296203b2426de8f846abf90c35d4425e0fb5438a2343b49"} Dec 01 10:05:49 crc kubenswrapper[4933]: I1201 10:05:49.701699 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60a36a70c7ee24fec296203b2426de8f846abf90c35d4425e0fb5438a2343b49" Dec 01 10:05:49 crc kubenswrapper[4933]: I1201 10:05:49.701977 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-78bfv" Dec 01 10:05:49 crc kubenswrapper[4933]: I1201 10:05:49.791795 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r"] Dec 01 10:05:49 crc kubenswrapper[4933]: E1201 10:05:49.792726 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c57f613c-9cc6-447a-acf6-11a2d381862f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 01 10:05:49 crc kubenswrapper[4933]: I1201 10:05:49.792775 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="c57f613c-9cc6-447a-acf6-11a2d381862f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 01 10:05:49 crc kubenswrapper[4933]: I1201 10:05:49.793096 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="c57f613c-9cc6-447a-acf6-11a2d381862f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 01 10:05:49 crc kubenswrapper[4933]: I1201 10:05:49.794359 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r" Dec 01 10:05:49 crc kubenswrapper[4933]: I1201 10:05:49.796834 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8vmpq" Dec 01 10:05:49 crc kubenswrapper[4933]: I1201 10:05:49.796929 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 10:05:49 crc kubenswrapper[4933]: I1201 10:05:49.797531 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 10:05:49 crc kubenswrapper[4933]: I1201 10:05:49.801876 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 10:05:49 crc kubenswrapper[4933]: I1201 10:05:49.805512 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r"] Dec 01 10:05:49 crc kubenswrapper[4933]: I1201 10:05:49.900656 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a84779c-7b89-4a0c-9ea0-34d0af08979d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r\" (UID: \"6a84779c-7b89-4a0c-9ea0-34d0af08979d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r" Dec 01 10:05:49 crc kubenswrapper[4933]: I1201 10:05:49.901197 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq8p4\" (UniqueName: \"kubernetes.io/projected/6a84779c-7b89-4a0c-9ea0-34d0af08979d-kube-api-access-cq8p4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r\" (UID: \"6a84779c-7b89-4a0c-9ea0-34d0af08979d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r" Dec 01 10:05:49 crc kubenswrapper[4933]: I1201 10:05:49.901729 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a84779c-7b89-4a0c-9ea0-34d0af08979d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r\" (UID: \"6a84779c-7b89-4a0c-9ea0-34d0af08979d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r" Dec 01 10:05:50 crc kubenswrapper[4933]: I1201 10:05:50.004490 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq8p4\" (UniqueName: \"kubernetes.io/projected/6a84779c-7b89-4a0c-9ea0-34d0af08979d-kube-api-access-cq8p4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r\" (UID: \"6a84779c-7b89-4a0c-9ea0-34d0af08979d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r" Dec 01 10:05:50 crc kubenswrapper[4933]: I1201 10:05:50.004680 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a84779c-7b89-4a0c-9ea0-34d0af08979d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r\" (UID: \"6a84779c-7b89-4a0c-9ea0-34d0af08979d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r" Dec 01 10:05:50 crc kubenswrapper[4933]: I1201 10:05:50.004788 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a84779c-7b89-4a0c-9ea0-34d0af08979d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r\" (UID: 
\"6a84779c-7b89-4a0c-9ea0-34d0af08979d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r" Dec 01 10:05:50 crc kubenswrapper[4933]: I1201 10:05:50.010650 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a84779c-7b89-4a0c-9ea0-34d0af08979d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r\" (UID: \"6a84779c-7b89-4a0c-9ea0-34d0af08979d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r" Dec 01 10:05:50 crc kubenswrapper[4933]: I1201 10:05:50.011332 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a84779c-7b89-4a0c-9ea0-34d0af08979d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r\" (UID: \"6a84779c-7b89-4a0c-9ea0-34d0af08979d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r" Dec 01 10:05:50 crc kubenswrapper[4933]: I1201 10:05:50.024693 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq8p4\" (UniqueName: \"kubernetes.io/projected/6a84779c-7b89-4a0c-9ea0-34d0af08979d-kube-api-access-cq8p4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r\" (UID: \"6a84779c-7b89-4a0c-9ea0-34d0af08979d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r" Dec 01 10:05:50 crc kubenswrapper[4933]: I1201 10:05:50.116338 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r" Dec 01 10:05:50 crc kubenswrapper[4933]: I1201 10:05:50.760845 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r"] Dec 01 10:05:51 crc kubenswrapper[4933]: I1201 10:05:51.735062 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r" event={"ID":"6a84779c-7b89-4a0c-9ea0-34d0af08979d","Type":"ContainerStarted","Data":"a425b55ef1636fd90a324d63d52b23b2d3a6c4535d1c181dc7ff4f6ef97b6ec2"} Dec 01 10:05:51 crc kubenswrapper[4933]: I1201 10:05:51.736031 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r" event={"ID":"6a84779c-7b89-4a0c-9ea0-34d0af08979d","Type":"ContainerStarted","Data":"475208a30c36576f5345d171b407643f9d863fc2899a52c583be0eb7d3f32812"} Dec 01 10:05:51 crc kubenswrapper[4933]: I1201 10:05:51.764686 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r" podStartSLOduration=2.323048017 podStartE2EDuration="2.764650889s" podCreationTimestamp="2025-12-01 10:05:49 +0000 UTC" firstStartedPulling="2025-12-01 10:05:50.7736827 +0000 UTC m=+2041.415406315" lastFinishedPulling="2025-12-01 10:05:51.215285572 +0000 UTC m=+2041.857009187" observedRunningTime="2025-12-01 10:05:51.762029554 +0000 UTC m=+2042.403753189" watchObservedRunningTime="2025-12-01 10:05:51.764650889 +0000 UTC m=+2042.406374504" Dec 01 10:05:58 crc kubenswrapper[4933]: I1201 10:05:58.735468 4933 scope.go:117] "RemoveContainer" containerID="4e386786018e593ebe1e4a7bc445bb8c6ae257aa2ef4957730a36d787f7eee6b" Dec 01 10:06:02 crc kubenswrapper[4933]: I1201 10:06:02.861132 4933 generic.go:334] "Generic (PLEG): container finished" podID="6a84779c-7b89-4a0c-9ea0-34d0af08979d" containerID="a425b55ef1636fd90a324d63d52b23b2d3a6c4535d1c181dc7ff4f6ef97b6ec2" exitCode=0 Dec 01 10:06:02 crc 
kubenswrapper[4933]: I1201 10:06:02.861276 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r" event={"ID":"6a84779c-7b89-4a0c-9ea0-34d0af08979d","Type":"ContainerDied","Data":"a425b55ef1636fd90a324d63d52b23b2d3a6c4535d1c181dc7ff4f6ef97b6ec2"} Dec 01 10:06:04 crc kubenswrapper[4933]: I1201 10:06:04.422026 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r" Dec 01 10:06:04 crc kubenswrapper[4933]: I1201 10:06:04.578775 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq8p4\" (UniqueName: \"kubernetes.io/projected/6a84779c-7b89-4a0c-9ea0-34d0af08979d-kube-api-access-cq8p4\") pod \"6a84779c-7b89-4a0c-9ea0-34d0af08979d\" (UID: \"6a84779c-7b89-4a0c-9ea0-34d0af08979d\") " Dec 01 10:06:04 crc kubenswrapper[4933]: I1201 10:06:04.578896 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a84779c-7b89-4a0c-9ea0-34d0af08979d-inventory\") pod \"6a84779c-7b89-4a0c-9ea0-34d0af08979d\" (UID: \"6a84779c-7b89-4a0c-9ea0-34d0af08979d\") " Dec 01 10:06:04 crc kubenswrapper[4933]: I1201 10:06:04.579054 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a84779c-7b89-4a0c-9ea0-34d0af08979d-ssh-key\") pod \"6a84779c-7b89-4a0c-9ea0-34d0af08979d\" (UID: \"6a84779c-7b89-4a0c-9ea0-34d0af08979d\") " Dec 01 10:06:04 crc kubenswrapper[4933]: I1201 10:06:04.590093 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a84779c-7b89-4a0c-9ea0-34d0af08979d-kube-api-access-cq8p4" (OuterVolumeSpecName: "kube-api-access-cq8p4") pod "6a84779c-7b89-4a0c-9ea0-34d0af08979d" (UID: "6a84779c-7b89-4a0c-9ea0-34d0af08979d"). InnerVolumeSpecName "kube-api-access-cq8p4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:06:04 crc kubenswrapper[4933]: I1201 10:06:04.620704 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a84779c-7b89-4a0c-9ea0-34d0af08979d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6a84779c-7b89-4a0c-9ea0-34d0af08979d" (UID: "6a84779c-7b89-4a0c-9ea0-34d0af08979d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:06:04 crc kubenswrapper[4933]: I1201 10:06:04.625058 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a84779c-7b89-4a0c-9ea0-34d0af08979d-inventory" (OuterVolumeSpecName: "inventory") pod "6a84779c-7b89-4a0c-9ea0-34d0af08979d" (UID: "6a84779c-7b89-4a0c-9ea0-34d0af08979d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:06:04 crc kubenswrapper[4933]: I1201 10:06:04.681823 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq8p4\" (UniqueName: \"kubernetes.io/projected/6a84779c-7b89-4a0c-9ea0-34d0af08979d-kube-api-access-cq8p4\") on node \"crc\" DevicePath \"\"" Dec 01 10:06:04 crc kubenswrapper[4933]: I1201 10:06:04.681862 4933 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a84779c-7b89-4a0c-9ea0-34d0af08979d-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 10:06:04 crc kubenswrapper[4933]: I1201 10:06:04.681873 4933 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a84779c-7b89-4a0c-9ea0-34d0af08979d-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 10:06:04 crc kubenswrapper[4933]: I1201 10:06:04.886546 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r" event={"ID":"6a84779c-7b89-4a0c-9ea0-34d0af08979d","Type":"ContainerDied","Data":"475208a30c36576f5345d171b407643f9d863fc2899a52c583be0eb7d3f32812"} Dec 01 10:06:04 crc kubenswrapper[4933]: I1201 10:06:04.886624 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="475208a30c36576f5345d171b407643f9d863fc2899a52c583be0eb7d3f32812" Dec 01 10:06:04 crc kubenswrapper[4933]: I1201 10:06:04.887078 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.002048 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f"] Dec 01 10:06:05 crc kubenswrapper[4933]: E1201 10:06:05.002546 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a84779c-7b89-4a0c-9ea0-34d0af08979d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.002567 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a84779c-7b89-4a0c-9ea0-34d0af08979d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.002797 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a84779c-7b89-4a0c-9ea0-34d0af08979d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.003968 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.007279 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.007908 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8vmpq" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.008217 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.008461 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.008828 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.009094 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.009818 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.012693 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.029225 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f"] Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.090211 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.090763 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.090895 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.091002 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: 
\"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.091100 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.091271 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.091444 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.091635 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmsjd\" (UniqueName: \"kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-kube-api-access-dmsjd\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.091757 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.091843 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.092080 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc 
kubenswrapper[4933]: I1201 10:06:05.092217 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.092284 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.092407 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.247938 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.248261 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.248572 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.248986 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.249250 4933 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.249441 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.249519 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.249803 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.249997 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.250208 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.250398 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmsjd\" (UniqueName: \"kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-kube-api-access-dmsjd\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.250534 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.250823 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.250914 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.263289 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.266737 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.267644 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.293708 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.293789 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.294116 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.294755 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.297019 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.297101 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.298592 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.305056 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.305135 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.305964 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmsjd\" (UniqueName: \"kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-kube-api-access-dmsjd\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc 
kubenswrapper[4933]: I1201 10:06:05.306712 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.327029 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:05 crc kubenswrapper[4933]: I1201 10:06:05.925583 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f"] Dec 01 10:06:05 crc kubenswrapper[4933]: W1201 10:06:05.931373 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ddb223b_7a15_4443_8347_19763927dc95.slice/crio-f0000e53c085d6e56bf2c3efa39ec6ac57782e606d629d5db22fd9b90050b135 WatchSource:0}: Error finding container f0000e53c085d6e56bf2c3efa39ec6ac57782e606d629d5db22fd9b90050b135: Status 404 returned error can't find the container with id f0000e53c085d6e56bf2c3efa39ec6ac57782e606d629d5db22fd9b90050b135 Dec 01 10:06:06 crc kubenswrapper[4933]: I1201 10:06:06.908881 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" event={"ID":"4ddb223b-7a15-4443-8347-19763927dc95","Type":"ContainerStarted","Data":"b0f969e0c65ca841884684a4e5e968d61cd408e3f95947c5693be94791dd48df"} Dec 01 10:06:06 crc kubenswrapper[4933]: I1201 10:06:06.909439 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" event={"ID":"4ddb223b-7a15-4443-8347-19763927dc95","Type":"ContainerStarted","Data":"f0000e53c085d6e56bf2c3efa39ec6ac57782e606d629d5db22fd9b90050b135"} Dec 01 10:06:06 crc kubenswrapper[4933]: I1201 10:06:06.938025 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" podStartSLOduration=2.376630319 podStartE2EDuration="2.938003272s" podCreationTimestamp="2025-12-01 10:06:04 +0000 UTC" firstStartedPulling="2025-12-01 10:06:05.935238683 +0000 UTC m=+2056.576962298" lastFinishedPulling="2025-12-01 10:06:06.496611626 +0000 UTC m=+2057.138335251" observedRunningTime="2025-12-01 10:06:06.928974439 +0000 UTC m=+2057.570698074" watchObservedRunningTime="2025-12-01 10:06:06.938003272 +0000 UTC m=+2057.579726887" Dec 01 10:06:46 crc kubenswrapper[4933]: E1201 10:06:46.665967 4933 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ddb223b_7a15_4443_8347_19763927dc95.slice/crio-conmon-b0f969e0c65ca841884684a4e5e968d61cd408e3f95947c5693be94791dd48df.scope\": RecentStats: unable to find data in memory cache]" Dec 01 10:06:47 crc kubenswrapper[4933]: I1201 10:06:47.326223 4933 generic.go:334] "Generic (PLEG): container finished" podID="4ddb223b-7a15-4443-8347-19763927dc95" containerID="b0f969e0c65ca841884684a4e5e968d61cd408e3f95947c5693be94791dd48df" exitCode=0 Dec 01 10:06:47 crc kubenswrapper[4933]: I1201 10:06:47.326264 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" event={"ID":"4ddb223b-7a15-4443-8347-19763927dc95","Type":"ContainerDied","Data":"b0f969e0c65ca841884684a4e5e968d61cd408e3f95947c5693be94791dd48df"} Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.828190 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.960935 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"4ddb223b-7a15-4443-8347-19763927dc95\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.961024 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-nova-combined-ca-bundle\") pod \"4ddb223b-7a15-4443-8347-19763927dc95\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.961122 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-ssh-key\") pod \"4ddb223b-7a15-4443-8347-19763927dc95\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.961243 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-libvirt-combined-ca-bundle\") pod \"4ddb223b-7a15-4443-8347-19763927dc95\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.961291 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-openstack-edpm-ipam-ovn-default-certs-0\") pod \"4ddb223b-7a15-4443-8347-19763927dc95\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.961504 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-inventory\") pod \"4ddb223b-7a15-4443-8347-19763927dc95\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.961546 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-repo-setup-combined-ca-bundle\") pod \"4ddb223b-7a15-4443-8347-19763927dc95\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.961593 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"4ddb223b-7a15-4443-8347-19763927dc95\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.961630 4933 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-ovn-combined-ca-bundle\") pod \"4ddb223b-7a15-4443-8347-19763927dc95\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.961662 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-bootstrap-combined-ca-bundle\") pod \"4ddb223b-7a15-4443-8347-19763927dc95\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.961726 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmsjd\" (UniqueName: \"kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-kube-api-access-dmsjd\") pod \"4ddb223b-7a15-4443-8347-19763927dc95\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.961781 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-neutron-metadata-combined-ca-bundle\") pod \"4ddb223b-7a15-4443-8347-19763927dc95\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.961821 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-telemetry-combined-ca-bundle\") pod \"4ddb223b-7a15-4443-8347-19763927dc95\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.961924 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"4ddb223b-7a15-4443-8347-19763927dc95\" (UID: \"4ddb223b-7a15-4443-8347-19763927dc95\") " Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.972122 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "4ddb223b-7a15-4443-8347-19763927dc95" (UID: "4ddb223b-7a15-4443-8347-19763927dc95"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.975055 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-kube-api-access-dmsjd" (OuterVolumeSpecName: "kube-api-access-dmsjd") pod "4ddb223b-7a15-4443-8347-19763927dc95" (UID: "4ddb223b-7a15-4443-8347-19763927dc95"). InnerVolumeSpecName "kube-api-access-dmsjd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.976402 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "4ddb223b-7a15-4443-8347-19763927dc95" (UID: "4ddb223b-7a15-4443-8347-19763927dc95"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.976815 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "4ddb223b-7a15-4443-8347-19763927dc95" (UID: "4ddb223b-7a15-4443-8347-19763927dc95"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.976829 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "4ddb223b-7a15-4443-8347-19763927dc95" (UID: "4ddb223b-7a15-4443-8347-19763927dc95"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.977648 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "4ddb223b-7a15-4443-8347-19763927dc95" (UID: "4ddb223b-7a15-4443-8347-19763927dc95"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.977762 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "4ddb223b-7a15-4443-8347-19763927dc95" (UID: "4ddb223b-7a15-4443-8347-19763927dc95"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.977706 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "4ddb223b-7a15-4443-8347-19763927dc95" (UID: "4ddb223b-7a15-4443-8347-19763927dc95"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.978455 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "4ddb223b-7a15-4443-8347-19763927dc95" (UID: "4ddb223b-7a15-4443-8347-19763927dc95"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.980785 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "4ddb223b-7a15-4443-8347-19763927dc95" (UID: "4ddb223b-7a15-4443-8347-19763927dc95"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.988167 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "4ddb223b-7a15-4443-8347-19763927dc95" (UID: "4ddb223b-7a15-4443-8347-19763927dc95"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:06:48 crc kubenswrapper[4933]: I1201 10:06:48.988256 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "4ddb223b-7a15-4443-8347-19763927dc95" (UID: "4ddb223b-7a15-4443-8347-19763927dc95"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.013147 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-inventory" (OuterVolumeSpecName: "inventory") pod "4ddb223b-7a15-4443-8347-19763927dc95" (UID: "4ddb223b-7a15-4443-8347-19763927dc95"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.019515 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4ddb223b-7a15-4443-8347-19763927dc95" (UID: "4ddb223b-7a15-4443-8347-19763927dc95"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.065215 4933 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.065268 4933 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.065285 4933 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.065300 4933 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.065331 4933 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.065350 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmsjd\" (UniqueName: \"kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-kube-api-access-dmsjd\") on node \"crc\" DevicePath \"\"" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.065367 4933 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.065386 4933 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.065399 4933 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.065412 4933 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.065428 4933 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.065440 4933 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.065454 4933 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddb223b-7a15-4443-8347-19763927dc95-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.065468 4933 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4ddb223b-7a15-4443-8347-19763927dc95-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.350417 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" event={"ID":"4ddb223b-7a15-4443-8347-19763927dc95","Type":"ContainerDied","Data":"f0000e53c085d6e56bf2c3efa39ec6ac57782e606d629d5db22fd9b90050b135"} Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.350484 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0000e53c085d6e56bf2c3efa39ec6ac57782e606d629d5db22fd9b90050b135" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.350512 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.476630 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9"] Dec 01 10:06:49 crc kubenswrapper[4933]: E1201 10:06:49.477354 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ddb223b-7a15-4443-8347-19763927dc95" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.477379 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ddb223b-7a15-4443-8347-19763927dc95" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.477709 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ddb223b-7a15-4443-8347-19763927dc95" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.478821 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.482812 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.482940 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.482833 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.483105 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8vmpq" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.485937 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.511460 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9"] Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.577763 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f053536c-b281-4870-b827-93d59be1fdbd-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s6cr9\" (UID: \"f053536c-b281-4870-b827-93d59be1fdbd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.577833 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f053536c-b281-4870-b827-93d59be1fdbd-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s6cr9\" (UID: \"f053536c-b281-4870-b827-93d59be1fdbd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.577864 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dw72\" (UniqueName: \"kubernetes.io/projected/f053536c-b281-4870-b827-93d59be1fdbd-kube-api-access-8dw72\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s6cr9\" (UID: \"f053536c-b281-4870-b827-93d59be1fdbd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.578023 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f053536c-b281-4870-b827-93d59be1fdbd-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s6cr9\" (UID: \"f053536c-b281-4870-b827-93d59be1fdbd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.578056 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f053536c-b281-4870-b827-93d59be1fdbd-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s6cr9\" (UID: \"f053536c-b281-4870-b827-93d59be1fdbd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.682787 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/f053536c-b281-4870-b827-93d59be1fdbd-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s6cr9\" (UID: \"f053536c-b281-4870-b827-93d59be1fdbd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.682873 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f053536c-b281-4870-b827-93d59be1fdbd-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s6cr9\" (UID: \"f053536c-b281-4870-b827-93d59be1fdbd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.682918 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dw72\" (UniqueName: \"kubernetes.io/projected/f053536c-b281-4870-b827-93d59be1fdbd-kube-api-access-8dw72\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s6cr9\" (UID: \"f053536c-b281-4870-b827-93d59be1fdbd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.683211 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f053536c-b281-4870-b827-93d59be1fdbd-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s6cr9\" (UID: \"f053536c-b281-4870-b827-93d59be1fdbd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.683254 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f053536c-b281-4870-b827-93d59be1fdbd-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s6cr9\" (UID: \"f053536c-b281-4870-b827-93d59be1fdbd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.692045 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.692312 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.692489 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.694891 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f053536c-b281-4870-b827-93d59be1fdbd-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s6cr9\" (UID: \"f053536c-b281-4870-b827-93d59be1fdbd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.695849 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f053536c-b281-4870-b827-93d59be1fdbd-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s6cr9\" (UID: \"f053536c-b281-4870-b827-93d59be1fdbd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.697729 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/f053536c-b281-4870-b827-93d59be1fdbd-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s6cr9\" (UID: \"f053536c-b281-4870-b827-93d59be1fdbd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.700295 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f053536c-b281-4870-b827-93d59be1fdbd-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s6cr9\" (UID: \"f053536c-b281-4870-b827-93d59be1fdbd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.702097 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dw72\" (UniqueName: \"kubernetes.io/projected/f053536c-b281-4870-b827-93d59be1fdbd-kube-api-access-8dw72\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s6cr9\" (UID: \"f053536c-b281-4870-b827-93d59be1fdbd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.822646 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8vmpq" Dec 01 10:06:49 crc kubenswrapper[4933]: I1201 10:06:49.829111 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9" Dec 01 10:06:50 crc kubenswrapper[4933]: I1201 10:06:50.569114 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9"] Dec 01 10:06:51 crc kubenswrapper[4933]: I1201 10:06:51.245110 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 10:06:51 crc kubenswrapper[4933]: I1201 10:06:51.376226 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9" event={"ID":"f053536c-b281-4870-b827-93d59be1fdbd","Type":"ContainerStarted","Data":"e09749fe79ebc68b52a5d682fd68a874e169b75fd8fce29ef0faecb34f107d1d"} Dec 01 10:06:52 crc kubenswrapper[4933]: I1201 10:06:52.390815 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9" event={"ID":"f053536c-b281-4870-b827-93d59be1fdbd","Type":"ContainerStarted","Data":"ad76f26b38ee24b9538ccef0cdcca2e7eca811cc9c35192b90e49ec7125ce0d0"} Dec 01 10:06:52 crc kubenswrapper[4933]: I1201 10:06:52.416856 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9" podStartSLOduration=2.754917374 podStartE2EDuration="3.416823323s" podCreationTimestamp="2025-12-01 10:06:49 +0000 UTC" firstStartedPulling="2025-12-01 10:06:50.57987553 +0000 UTC m=+2101.221599145" lastFinishedPulling="2025-12-01 10:06:51.241781479 +0000 UTC m=+2101.883505094" observedRunningTime="2025-12-01 10:06:52.412006705 +0000 UTC m=+2103.053730320" watchObservedRunningTime="2025-12-01 10:06:52.416823323 +0000 UTC m=+2103.058546938" Dec 01 10:07:11 crc kubenswrapper[4933]: I1201 10:07:11.742146 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 10:07:11 crc kubenswrapper[4933]: I1201 10:07:11.742810 4933 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 10:07:41 crc kubenswrapper[4933]: I1201 10:07:41.741507 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 10:07:41 crc kubenswrapper[4933]: I1201 10:07:41.744573 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 10:07:56 crc kubenswrapper[4933]: I1201 10:07:56.054893 4933 generic.go:334] "Generic (PLEG): container finished" podID="f053536c-b281-4870-b827-93d59be1fdbd" containerID="ad76f26b38ee24b9538ccef0cdcca2e7eca811cc9c35192b90e49ec7125ce0d0" exitCode=0 Dec 01 10:07:56 crc kubenswrapper[4933]: I1201 10:07:56.055003 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9" event={"ID":"f053536c-b281-4870-b827-93d59be1fdbd","Type":"ContainerDied","Data":"ad76f26b38ee24b9538ccef0cdcca2e7eca811cc9c35192b90e49ec7125ce0d0"} Dec 01 10:07:57 crc kubenswrapper[4933]: I1201 10:07:57.569926 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9" Dec 01 10:07:57 crc kubenswrapper[4933]: I1201 10:07:57.649154 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f053536c-b281-4870-b827-93d59be1fdbd-ovncontroller-config-0\") pod \"f053536c-b281-4870-b827-93d59be1fdbd\" (UID: \"f053536c-b281-4870-b827-93d59be1fdbd\") " Dec 01 10:07:57 crc kubenswrapper[4933]: I1201 10:07:57.649658 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f053536c-b281-4870-b827-93d59be1fdbd-ssh-key\") pod \"f053536c-b281-4870-b827-93d59be1fdbd\" (UID: \"f053536c-b281-4870-b827-93d59be1fdbd\") " Dec 01 10:07:57 crc kubenswrapper[4933]: I1201 10:07:57.649728 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dw72\" (UniqueName: \"kubernetes.io/projected/f053536c-b281-4870-b827-93d59be1fdbd-kube-api-access-8dw72\") pod \"f053536c-b281-4870-b827-93d59be1fdbd\" (UID: \"f053536c-b281-4870-b827-93d59be1fdbd\") " Dec 01 10:07:57 crc kubenswrapper[4933]: I1201 10:07:57.649779 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f053536c-b281-4870-b827-93d59be1fdbd-inventory\") pod \"f053536c-b281-4870-b827-93d59be1fdbd\" (UID: \"f053536c-b281-4870-b827-93d59be1fdbd\") " Dec 01 10:07:57 crc kubenswrapper[4933]: I1201 10:07:57.649884 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f053536c-b281-4870-b827-93d59be1fdbd-ovn-combined-ca-bundle\") pod 
\"f053536c-b281-4870-b827-93d59be1fdbd\" (UID: \"f053536c-b281-4870-b827-93d59be1fdbd\") " Dec 01 10:07:57 crc kubenswrapper[4933]: I1201 10:07:57.657144 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f053536c-b281-4870-b827-93d59be1fdbd-kube-api-access-8dw72" (OuterVolumeSpecName: "kube-api-access-8dw72") pod "f053536c-b281-4870-b827-93d59be1fdbd" (UID: "f053536c-b281-4870-b827-93d59be1fdbd"). InnerVolumeSpecName "kube-api-access-8dw72". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:07:57 crc kubenswrapper[4933]: I1201 10:07:57.657739 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f053536c-b281-4870-b827-93d59be1fdbd-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "f053536c-b281-4870-b827-93d59be1fdbd" (UID: "f053536c-b281-4870-b827-93d59be1fdbd"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:07:57 crc kubenswrapper[4933]: I1201 10:07:57.683360 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f053536c-b281-4870-b827-93d59be1fdbd-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "f053536c-b281-4870-b827-93d59be1fdbd" (UID: "f053536c-b281-4870-b827-93d59be1fdbd"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 10:07:57 crc kubenswrapper[4933]: I1201 10:07:57.690543 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f053536c-b281-4870-b827-93d59be1fdbd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f053536c-b281-4870-b827-93d59be1fdbd" (UID: "f053536c-b281-4870-b827-93d59be1fdbd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:07:57 crc kubenswrapper[4933]: I1201 10:07:57.690561 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f053536c-b281-4870-b827-93d59be1fdbd-inventory" (OuterVolumeSpecName: "inventory") pod "f053536c-b281-4870-b827-93d59be1fdbd" (UID: "f053536c-b281-4870-b827-93d59be1fdbd"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:07:57 crc kubenswrapper[4933]: I1201 10:07:57.754392 4933 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f053536c-b281-4870-b827-93d59be1fdbd-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 01 10:07:57 crc kubenswrapper[4933]: I1201 10:07:57.754444 4933 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f053536c-b281-4870-b827-93d59be1fdbd-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 10:07:57 crc kubenswrapper[4933]: I1201 10:07:57.754464 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dw72\" (UniqueName: \"kubernetes.io/projected/f053536c-b281-4870-b827-93d59be1fdbd-kube-api-access-8dw72\") on node \"crc\" DevicePath \"\"" Dec 01 10:07:57 crc kubenswrapper[4933]: I1201 10:07:57.754481 4933 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f053536c-b281-4870-b827-93d59be1fdbd-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 10:07:57 crc kubenswrapper[4933]: I1201 10:07:57.754499 4933 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f053536c-b281-4870-b827-93d59be1fdbd-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.080431 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9" event={"ID":"f053536c-b281-4870-b827-93d59be1fdbd","Type":"ContainerDied","Data":"e09749fe79ebc68b52a5d682fd68a874e169b75fd8fce29ef0faecb34f107d1d"} Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.080505 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e09749fe79ebc68b52a5d682fd68a874e169b75fd8fce29ef0faecb34f107d1d" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.080859 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s6cr9" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.182423 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq"] Dec 01 10:07:58 crc kubenswrapper[4933]: E1201 10:07:58.182977 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f053536c-b281-4870-b827-93d59be1fdbd" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.183002 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="f053536c-b281-4870-b827-93d59be1fdbd" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.183250 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="f053536c-b281-4870-b827-93d59be1fdbd" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.184254 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.186561 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.186818 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.187046 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.187086 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.188160 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8vmpq" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.197642 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.201788 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq"] Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.264217 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq\" (UID: \"5242466a-3061-4db5-b9dd-77f6bff70350\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.264269 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn54p\" (UniqueName: \"kubernetes.io/projected/5242466a-3061-4db5-b9dd-77f6bff70350-kube-api-access-xn54p\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq\" (UID: \"5242466a-3061-4db5-b9dd-77f6bff70350\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.264322 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq\" (UID: \"5242466a-3061-4db5-b9dd-77f6bff70350\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.264369 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq\" (UID: \"5242466a-3061-4db5-b9dd-77f6bff70350\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.264393 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq\" (UID: \"5242466a-3061-4db5-b9dd-77f6bff70350\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.264493 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq\" (UID: \"5242466a-3061-4db5-b9dd-77f6bff70350\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.366468 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq\" (UID: \"5242466a-3061-4db5-b9dd-77f6bff70350\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.366591 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq\" (UID: \"5242466a-3061-4db5-b9dd-77f6bff70350\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.366852 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq\" (UID: \"5242466a-3061-4db5-b9dd-77f6bff70350\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.367153 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq\" (UID: \"5242466a-3061-4db5-b9dd-77f6bff70350\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.368096 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn54p\" (UniqueName: \"kubernetes.io/projected/5242466a-3061-4db5-b9dd-77f6bff70350-kube-api-access-xn54p\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq\" (UID: \"5242466a-3061-4db5-b9dd-77f6bff70350\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.368241 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq\" (UID: \"5242466a-3061-4db5-b9dd-77f6bff70350\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.370851 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq\" (UID: \"5242466a-3061-4db5-b9dd-77f6bff70350\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.371416 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq\" (UID: \"5242466a-3061-4db5-b9dd-77f6bff70350\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.372868 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq\" (UID: \"5242466a-3061-4db5-b9dd-77f6bff70350\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.373726 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq\" (UID: \"5242466a-3061-4db5-b9dd-77f6bff70350\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.375170 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq\" (UID: \"5242466a-3061-4db5-b9dd-77f6bff70350\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.394488 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn54p\" (UniqueName: \"kubernetes.io/projected/5242466a-3061-4db5-b9dd-77f6bff70350-kube-api-access-xn54p\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq\" (UID: \"5242466a-3061-4db5-b9dd-77f6bff70350\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" Dec 01 10:07:58 crc kubenswrapper[4933]: I1201 10:07:58.503337 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" Dec 01 10:07:59 crc kubenswrapper[4933]: I1201 10:07:59.074829 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq"] Dec 01 10:07:59 crc kubenswrapper[4933]: I1201 10:07:59.093758 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" event={"ID":"5242466a-3061-4db5-b9dd-77f6bff70350","Type":"ContainerStarted","Data":"77bc33296bf291c9a70f0e2b34a14d65c6cb07722554e8d3f4c83b23d9c55523"} Dec 01 10:08:01 crc kubenswrapper[4933]: I1201 10:08:01.120162 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" event={"ID":"5242466a-3061-4db5-b9dd-77f6bff70350","Type":"ContainerStarted","Data":"94cecd913c657c9a535786483330f88b709b1be8fc4bb20a034c22ecc3c08b35"} Dec 01 10:08:01 crc kubenswrapper[4933]: I1201 10:08:01.144663 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" podStartSLOduration=2.384151848 podStartE2EDuration="3.144633327s" podCreationTimestamp="2025-12-01 10:07:58 +0000 UTC" firstStartedPulling="2025-12-01 10:07:59.074928487 +0000 UTC m=+2169.716652122" lastFinishedPulling="2025-12-01 10:07:59.835409976 +0000 UTC m=+2170.477133601" observedRunningTime="2025-12-01 10:08:01.139177742 +0000 UTC m=+2171.780901357" watchObservedRunningTime="2025-12-01 10:08:01.144633327 +0000 UTC m=+2171.786356942" Dec 01 10:08:06 crc kubenswrapper[4933]: I1201 10:08:06.202257 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k664m"] Dec 01 10:08:06 crc kubenswrapper[4933]: I1201 10:08:06.206437 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k664m" Dec 01 10:08:06 crc kubenswrapper[4933]: I1201 10:08:06.228676 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k664m"] Dec 01 10:08:06 crc kubenswrapper[4933]: I1201 10:08:06.259292 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48bcbbc8-1034-48f1-8d5b-4c07f1f1526e-utilities\") pod \"community-operators-k664m\" (UID: \"48bcbbc8-1034-48f1-8d5b-4c07f1f1526e\") " pod="openshift-marketplace/community-operators-k664m" Dec 01 10:08:06 crc kubenswrapper[4933]: I1201 10:08:06.259595 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr8st\" (UniqueName: \"kubernetes.io/projected/48bcbbc8-1034-48f1-8d5b-4c07f1f1526e-kube-api-access-dr8st\") pod \"community-operators-k664m\" (UID: \"48bcbbc8-1034-48f1-8d5b-4c07f1f1526e\") " pod="openshift-marketplace/community-operators-k664m" Dec 01 10:08:06 crc kubenswrapper[4933]: I1201 10:08:06.259765 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48bcbbc8-1034-48f1-8d5b-4c07f1f1526e-catalog-content\") pod \"community-operators-k664m\" (UID: \"48bcbbc8-1034-48f1-8d5b-4c07f1f1526e\") " pod="openshift-marketplace/community-operators-k664m" Dec 01 10:08:06 crc kubenswrapper[4933]: I1201 10:08:06.361479 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48bcbbc8-1034-48f1-8d5b-4c07f1f1526e-utilities\") pod \"community-operators-k664m\" (UID: \"48bcbbc8-1034-48f1-8d5b-4c07f1f1526e\") " pod="openshift-marketplace/community-operators-k664m" Dec 01 10:08:06 crc kubenswrapper[4933]: I1201 10:08:06.361585 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr8st\" (UniqueName: \"kubernetes.io/projected/48bcbbc8-1034-48f1-8d5b-4c07f1f1526e-kube-api-access-dr8st\") pod \"community-operators-k664m\" (UID: \"48bcbbc8-1034-48f1-8d5b-4c07f1f1526e\") " pod="openshift-marketplace/community-operators-k664m" Dec 01 10:08:06 crc kubenswrapper[4933]: I1201 10:08:06.361670 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48bcbbc8-1034-48f1-8d5b-4c07f1f1526e-catalog-content\") pod \"community-operators-k664m\" (UID: \"48bcbbc8-1034-48f1-8d5b-4c07f1f1526e\") " pod="openshift-marketplace/community-operators-k664m" Dec 01 10:08:06 crc kubenswrapper[4933]: I1201 10:08:06.362143 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48bcbbc8-1034-48f1-8d5b-4c07f1f1526e-utilities\") pod \"community-operators-k664m\" (UID: \"48bcbbc8-1034-48f1-8d5b-4c07f1f1526e\") " pod="openshift-marketplace/community-operators-k664m" Dec 01 10:08:06 crc kubenswrapper[4933]: I1201 10:08:06.362354 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48bcbbc8-1034-48f1-8d5b-4c07f1f1526e-catalog-content\") pod \"community-operators-k664m\" (UID: \"48bcbbc8-1034-48f1-8d5b-4c07f1f1526e\") " pod="openshift-marketplace/community-operators-k664m" Dec 01 10:08:06 crc kubenswrapper[4933]: I1201 10:08:06.384159 4933 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dr8st\" (UniqueName: \"kubernetes.io/projected/48bcbbc8-1034-48f1-8d5b-4c07f1f1526e-kube-api-access-dr8st\") pod \"community-operators-k664m\" (UID: \"48bcbbc8-1034-48f1-8d5b-4c07f1f1526e\") " pod="openshift-marketplace/community-operators-k664m" Dec 01 10:08:06 crc kubenswrapper[4933]: I1201 10:08:06.538728 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k664m" Dec 01 10:08:07 crc kubenswrapper[4933]: W1201 10:08:07.088474 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48bcbbc8_1034_48f1_8d5b_4c07f1f1526e.slice/crio-2edf63c89ecad01891fb81cb6849242e2121ebdd717866c0e69411345c6b03ed WatchSource:0}: Error finding container 2edf63c89ecad01891fb81cb6849242e2121ebdd717866c0e69411345c6b03ed: Status 404 returned error can't find the container with id 2edf63c89ecad01891fb81cb6849242e2121ebdd717866c0e69411345c6b03ed Dec 01 10:08:07 crc kubenswrapper[4933]: I1201 10:08:07.096495 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k664m"] Dec 01 10:08:07 crc kubenswrapper[4933]: I1201 10:08:07.172615 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k664m" event={"ID":"48bcbbc8-1034-48f1-8d5b-4c07f1f1526e","Type":"ContainerStarted","Data":"2edf63c89ecad01891fb81cb6849242e2121ebdd717866c0e69411345c6b03ed"} Dec 01 10:08:08 crc kubenswrapper[4933]: I1201 10:08:08.182269 4933 generic.go:334] "Generic (PLEG): container finished" podID="48bcbbc8-1034-48f1-8d5b-4c07f1f1526e" containerID="04168fc07e6dd9d62f8d98e8bb0e3491834b035602842b06179c8ab953390f5c" exitCode=0 Dec 01 10:08:08 crc kubenswrapper[4933]: I1201 10:08:08.182344 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k664m" event={"ID":"48bcbbc8-1034-48f1-8d5b-4c07f1f1526e","Type":"ContainerDied","Data":"04168fc07e6dd9d62f8d98e8bb0e3491834b035602842b06179c8ab953390f5c"} Dec 01 10:08:09 crc kubenswrapper[4933]: I1201 10:08:09.197335 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k664m" event={"ID":"48bcbbc8-1034-48f1-8d5b-4c07f1f1526e","Type":"ContainerStarted","Data":"1578f6a20df4db34c6711adc343ebe567405ce9961facc5ec9ba73b10602c5a8"} Dec 01 10:08:10 crc kubenswrapper[4933]: I1201 10:08:10.212986 4933 generic.go:334] "Generic (PLEG): container finished" podID="48bcbbc8-1034-48f1-8d5b-4c07f1f1526e" containerID="1578f6a20df4db34c6711adc343ebe567405ce9961facc5ec9ba73b10602c5a8" exitCode=0 Dec 01 10:08:10 crc kubenswrapper[4933]: I1201 10:08:10.213098 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k664m" event={"ID":"48bcbbc8-1034-48f1-8d5b-4c07f1f1526e","Type":"ContainerDied","Data":"1578f6a20df4db34c6711adc343ebe567405ce9961facc5ec9ba73b10602c5a8"} Dec 01 10:08:11 crc kubenswrapper[4933]: I1201 10:08:11.224657 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k664m" event={"ID":"48bcbbc8-1034-48f1-8d5b-4c07f1f1526e","Type":"ContainerStarted","Data":"818cf14383a9c2a3004129e85e12e675f654ab44ed866c69864537034f04ebd1"} Dec 01 10:08:11 crc kubenswrapper[4933]: I1201 10:08:11.251367 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k664m" 
podStartSLOduration=2.6689341410000003 podStartE2EDuration="5.251348074s" podCreationTimestamp="2025-12-01 10:08:06 +0000 UTC" firstStartedPulling="2025-12-01 10:08:08.184620997 +0000 UTC m=+2178.826344612" lastFinishedPulling="2025-12-01 10:08:10.76703493 +0000 UTC m=+2181.408758545" observedRunningTime="2025-12-01 10:08:11.243206513 +0000 UTC m=+2181.884930128" watchObservedRunningTime="2025-12-01 10:08:11.251348074 +0000 UTC m=+2181.893071689" Dec 01 10:08:11 crc kubenswrapper[4933]: I1201 10:08:11.741540 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 10:08:11 crc kubenswrapper[4933]: I1201 10:08:11.741651 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 10:08:11 crc kubenswrapper[4933]: I1201 10:08:11.741714 4933 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" Dec 01 10:08:11 crc kubenswrapper[4933]: I1201 10:08:11.742544 4933 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7a98bf93ed27d22c27782b178fc01678ac5109b9d40b5fc3e17d08873b6a98b6"} pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 10:08:11 crc kubenswrapper[4933]: I1201 10:08:11.742627 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" containerID="cri-o://7a98bf93ed27d22c27782b178fc01678ac5109b9d40b5fc3e17d08873b6a98b6" gracePeriod=600
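(Editor's note, not a journal entry.) The sequence above is the standard liveness-probe kill path: the prober's HTTP GET to 127.0.0.1:8798/health was refused, the probe was recorded as a failure, and the kubelet killed the container with the pod's termination grace period (600s here); the ContainerDied/ContainerStarted pair that follows is the restart. A rough sketch of what the prober loop amounts to, assuming standard Kubernetes probe semantics; the URL comes from the log output, but the threshold and period are placeholders, since the pod spec is not part of this journal:

    import time
    import urllib.error
    import urllib.request

    def probe_once(url: str = "http://127.0.0.1:8798/health", timeout: float = 1.0) -> bool:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 400   # a 2xx/3xx response counts as success
        except (urllib.error.URLError, OSError):
            return False                          # "connect: connection refused" lands here

    FAILURE_THRESHOLD = 3   # placeholder; the real failureThreshold lives in the pod spec
    PERIOD_SECONDS = 10     # placeholder probe period
    failures = 0
    while True:
        failures = 0 if probe_once() else failures + 1
        if failures >= FAILURE_THRESHOLD:
            break   # the kubelet reacts by killing the container (gracePeriod=600 above)
        time.sleep(PERIOD_SECONDS)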
Dec 01 10:08:12 crc kubenswrapper[4933]: I1201 10:08:12.235851 4933 generic.go:334] "Generic (PLEG): container finished" podID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerID="7a98bf93ed27d22c27782b178fc01678ac5109b9d40b5fc3e17d08873b6a98b6" exitCode=0 Dec 01 10:08:12 crc kubenswrapper[4933]: I1201 10:08:12.236059 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerDied","Data":"7a98bf93ed27d22c27782b178fc01678ac5109b9d40b5fc3e17d08873b6a98b6"} Dec 01 10:08:12 crc kubenswrapper[4933]: I1201 10:08:12.236235 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerStarted","Data":"009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907"} Dec 01 10:08:12 crc kubenswrapper[4933]: I1201 10:08:12.236263 4933 scope.go:117] "RemoveContainer" containerID="c4dd3b7af253506bad3a2c236ccf9af354d82b8bb3bd615cdc0cced09c19c417" Dec 01 10:08:16 crc kubenswrapper[4933]: I1201 10:08:16.539811 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k664m" Dec 01 10:08:16 crc kubenswrapper[4933]: I1201 10:08:16.540787 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k664m" Dec 01 10:08:16 crc kubenswrapper[4933]: I1201 10:08:16.601186 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k664m" Dec 01 10:08:17 crc kubenswrapper[4933]: I1201 10:08:17.338965 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k664m" Dec 01 10:08:17 crc kubenswrapper[4933]: I1201 10:08:17.400841 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k664m"] Dec 01 10:08:19 crc kubenswrapper[4933]: I1201 10:08:19.304260 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k664m" podUID="48bcbbc8-1034-48f1-8d5b-4c07f1f1526e" containerName="registry-server" containerID="cri-o://818cf14383a9c2a3004129e85e12e675f654ab44ed866c69864537034f04ebd1" gracePeriod=2 Dec 01 10:08:19 crc kubenswrapper[4933]: I1201 10:08:19.796702 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k664m" Dec 01 10:08:19 crc kubenswrapper[4933]: I1201 10:08:19.883925 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48bcbbc8-1034-48f1-8d5b-4c07f1f1526e-catalog-content\") pod \"48bcbbc8-1034-48f1-8d5b-4c07f1f1526e\" (UID: \"48bcbbc8-1034-48f1-8d5b-4c07f1f1526e\") " Dec 01 10:08:19 crc kubenswrapper[4933]: I1201 10:08:19.884077 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr8st\" (UniqueName: \"kubernetes.io/projected/48bcbbc8-1034-48f1-8d5b-4c07f1f1526e-kube-api-access-dr8st\") pod \"48bcbbc8-1034-48f1-8d5b-4c07f1f1526e\" (UID: \"48bcbbc8-1034-48f1-8d5b-4c07f1f1526e\") " Dec 01 10:08:19 crc kubenswrapper[4933]: I1201 10:08:19.884234 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48bcbbc8-1034-48f1-8d5b-4c07f1f1526e-utilities\") pod \"48bcbbc8-1034-48f1-8d5b-4c07f1f1526e\" (UID: \"48bcbbc8-1034-48f1-8d5b-4c07f1f1526e\") " Dec 01 10:08:19 crc kubenswrapper[4933]: I1201 10:08:19.884993 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48bcbbc8-1034-48f1-8d5b-4c07f1f1526e-utilities" (OuterVolumeSpecName: "utilities") pod "48bcbbc8-1034-48f1-8d5b-4c07f1f1526e" (UID: "48bcbbc8-1034-48f1-8d5b-4c07f1f1526e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:08:19 crc kubenswrapper[4933]: I1201 10:08:19.891186 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48bcbbc8-1034-48f1-8d5b-4c07f1f1526e-kube-api-access-dr8st" (OuterVolumeSpecName: "kube-api-access-dr8st") pod "48bcbbc8-1034-48f1-8d5b-4c07f1f1526e" (UID: "48bcbbc8-1034-48f1-8d5b-4c07f1f1526e"). InnerVolumeSpecName "kube-api-access-dr8st".
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:08:19 crc kubenswrapper[4933]: I1201 10:08:19.936804 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48bcbbc8-1034-48f1-8d5b-4c07f1f1526e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48bcbbc8-1034-48f1-8d5b-4c07f1f1526e" (UID: "48bcbbc8-1034-48f1-8d5b-4c07f1f1526e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:08:19 crc kubenswrapper[4933]: I1201 10:08:19.986674 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dr8st\" (UniqueName: \"kubernetes.io/projected/48bcbbc8-1034-48f1-8d5b-4c07f1f1526e-kube-api-access-dr8st\") on node \"crc\" DevicePath \"\"" Dec 01 10:08:19 crc kubenswrapper[4933]: I1201 10:08:19.986733 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48bcbbc8-1034-48f1-8d5b-4c07f1f1526e-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 10:08:19 crc kubenswrapper[4933]: I1201 10:08:19.986747 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48bcbbc8-1034-48f1-8d5b-4c07f1f1526e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 10:08:20 crc kubenswrapper[4933]: I1201 10:08:20.315446 4933 generic.go:334] "Generic (PLEG): container finished" podID="48bcbbc8-1034-48f1-8d5b-4c07f1f1526e" containerID="818cf14383a9c2a3004129e85e12e675f654ab44ed866c69864537034f04ebd1" exitCode=0 Dec 01 10:08:20 crc kubenswrapper[4933]: I1201 10:08:20.315528 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k664m" Dec 01 10:08:20 crc kubenswrapper[4933]: I1201 10:08:20.315546 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k664m" event={"ID":"48bcbbc8-1034-48f1-8d5b-4c07f1f1526e","Type":"ContainerDied","Data":"818cf14383a9c2a3004129e85e12e675f654ab44ed866c69864537034f04ebd1"} Dec 01 10:08:20 crc kubenswrapper[4933]: I1201 10:08:20.315614 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k664m" event={"ID":"48bcbbc8-1034-48f1-8d5b-4c07f1f1526e","Type":"ContainerDied","Data":"2edf63c89ecad01891fb81cb6849242e2121ebdd717866c0e69411345c6b03ed"} Dec 01 10:08:20 crc kubenswrapper[4933]: I1201 10:08:20.315648 4933 scope.go:117] "RemoveContainer" containerID="818cf14383a9c2a3004129e85e12e675f654ab44ed866c69864537034f04ebd1" Dec 01 10:08:20 crc kubenswrapper[4933]: I1201 10:08:20.346659 4933 scope.go:117] "RemoveContainer" containerID="1578f6a20df4db34c6711adc343ebe567405ce9961facc5ec9ba73b10602c5a8" Dec 01 10:08:20 crc kubenswrapper[4933]: I1201 10:08:20.363819 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k664m"] Dec 01 10:08:20 crc kubenswrapper[4933]: I1201 10:08:20.375773 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k664m"] Dec 01 10:08:20 crc kubenswrapper[4933]: I1201 10:08:20.383731 4933 scope.go:117] "RemoveContainer" containerID="04168fc07e6dd9d62f8d98e8bb0e3491834b035602842b06179c8ab953390f5c" Dec 01 10:08:20 crc kubenswrapper[4933]: I1201 10:08:20.438640 4933 scope.go:117] "RemoveContainer" containerID="818cf14383a9c2a3004129e85e12e675f654ab44ed866c69864537034f04ebd1" Dec 01 10:08:20 crc kubenswrapper[4933]: E1201 10:08:20.439147 4933 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"818cf14383a9c2a3004129e85e12e675f654ab44ed866c69864537034f04ebd1\": container with ID starting with 818cf14383a9c2a3004129e85e12e675f654ab44ed866c69864537034f04ebd1 not found: ID does not exist" containerID="818cf14383a9c2a3004129e85e12e675f654ab44ed866c69864537034f04ebd1" Dec 01 10:08:20 crc kubenswrapper[4933]: I1201 10:08:20.439186 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"818cf14383a9c2a3004129e85e12e675f654ab44ed866c69864537034f04ebd1"} err="failed to get container status \"818cf14383a9c2a3004129e85e12e675f654ab44ed866c69864537034f04ebd1\": rpc error: code = NotFound desc = could not find container \"818cf14383a9c2a3004129e85e12e675f654ab44ed866c69864537034f04ebd1\": container with ID starting with 818cf14383a9c2a3004129e85e12e675f654ab44ed866c69864537034f04ebd1 not found: ID does not exist" Dec 01 10:08:20 crc kubenswrapper[4933]: I1201 10:08:20.439218 4933 scope.go:117] "RemoveContainer" containerID="1578f6a20df4db34c6711adc343ebe567405ce9961facc5ec9ba73b10602c5a8" Dec 01 10:08:20 crc kubenswrapper[4933]: E1201 10:08:20.439509 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1578f6a20df4db34c6711adc343ebe567405ce9961facc5ec9ba73b10602c5a8\": container with ID starting with 1578f6a20df4db34c6711adc343ebe567405ce9961facc5ec9ba73b10602c5a8 not found: ID does not exist" containerID="1578f6a20df4db34c6711adc343ebe567405ce9961facc5ec9ba73b10602c5a8" Dec 01 10:08:20 crc kubenswrapper[4933]: I1201 10:08:20.439543 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1578f6a20df4db34c6711adc343ebe567405ce9961facc5ec9ba73b10602c5a8"} err="failed to get container status \"1578f6a20df4db34c6711adc343ebe567405ce9961facc5ec9ba73b10602c5a8\": rpc error: code = NotFound desc = could not find container \"1578f6a20df4db34c6711adc343ebe567405ce9961facc5ec9ba73b10602c5a8\": container with ID starting with 1578f6a20df4db34c6711adc343ebe567405ce9961facc5ec9ba73b10602c5a8 not found: ID does not exist" Dec 01 10:08:20 crc kubenswrapper[4933]: I1201 10:08:20.439565 4933 scope.go:117] "RemoveContainer" containerID="04168fc07e6dd9d62f8d98e8bb0e3491834b035602842b06179c8ab953390f5c" Dec 01 10:08:20 crc kubenswrapper[4933]: E1201 10:08:20.439811 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04168fc07e6dd9d62f8d98e8bb0e3491834b035602842b06179c8ab953390f5c\": container with ID starting with 04168fc07e6dd9d62f8d98e8bb0e3491834b035602842b06179c8ab953390f5c not found: ID does not exist" containerID="04168fc07e6dd9d62f8d98e8bb0e3491834b035602842b06179c8ab953390f5c" Dec 01 10:08:20 crc kubenswrapper[4933]: I1201 10:08:20.439840 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04168fc07e6dd9d62f8d98e8bb0e3491834b035602842b06179c8ab953390f5c"} err="failed to get container status \"04168fc07e6dd9d62f8d98e8bb0e3491834b035602842b06179c8ab953390f5c\": rpc error: code = NotFound desc = could not find container \"04168fc07e6dd9d62f8d98e8bb0e3491834b035602842b06179c8ab953390f5c\": container with ID starting with 04168fc07e6dd9d62f8d98e8bb0e3491834b035602842b06179c8ab953390f5c not found: ID does not exist" Dec 01 10:08:21 crc kubenswrapper[4933]: I1201 10:08:21.679789 4933 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="48bcbbc8-1034-48f1-8d5b-4c07f1f1526e" path="/var/lib/kubelet/pods/48bcbbc8-1034-48f1-8d5b-4c07f1f1526e/volumes" Dec 01 10:08:26 crc kubenswrapper[4933]: I1201 10:08:26.670729 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h2wm2"] Dec 01 10:08:26 crc kubenswrapper[4933]: E1201 10:08:26.671863 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48bcbbc8-1034-48f1-8d5b-4c07f1f1526e" containerName="registry-server" Dec 01 10:08:26 crc kubenswrapper[4933]: I1201 10:08:26.671880 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="48bcbbc8-1034-48f1-8d5b-4c07f1f1526e" containerName="registry-server" Dec 01 10:08:26 crc kubenswrapper[4933]: E1201 10:08:26.671914 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48bcbbc8-1034-48f1-8d5b-4c07f1f1526e" containerName="extract-content" Dec 01 10:08:26 crc kubenswrapper[4933]: I1201 10:08:26.671922 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="48bcbbc8-1034-48f1-8d5b-4c07f1f1526e" containerName="extract-content" Dec 01 10:08:26 crc kubenswrapper[4933]: E1201 10:08:26.671941 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48bcbbc8-1034-48f1-8d5b-4c07f1f1526e" containerName="extract-utilities" Dec 01 10:08:26 crc kubenswrapper[4933]: I1201 10:08:26.671950 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="48bcbbc8-1034-48f1-8d5b-4c07f1f1526e" containerName="extract-utilities" Dec 01 10:08:26 crc kubenswrapper[4933]: I1201 10:08:26.672177 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="48bcbbc8-1034-48f1-8d5b-4c07f1f1526e" containerName="registry-server" Dec 01 10:08:26 crc kubenswrapper[4933]: I1201 10:08:26.693370 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h2wm2" Dec 01 10:08:26 crc kubenswrapper[4933]: I1201 10:08:26.732073 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h2wm2"] Dec 01 10:08:26 crc kubenswrapper[4933]: I1201 10:08:26.732343 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6175b642-499b-42f8-acdc-490efb4a64f7-catalog-content\") pod \"certified-operators-h2wm2\" (UID: \"6175b642-499b-42f8-acdc-490efb4a64f7\") " pod="openshift-marketplace/certified-operators-h2wm2" Dec 01 10:08:26 crc kubenswrapper[4933]: I1201 10:08:26.732467 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp6ds\" (UniqueName: \"kubernetes.io/projected/6175b642-499b-42f8-acdc-490efb4a64f7-kube-api-access-mp6ds\") pod \"certified-operators-h2wm2\" (UID: \"6175b642-499b-42f8-acdc-490efb4a64f7\") " pod="openshift-marketplace/certified-operators-h2wm2" Dec 01 10:08:26 crc kubenswrapper[4933]: I1201 10:08:26.732729 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6175b642-499b-42f8-acdc-490efb4a64f7-utilities\") pod \"certified-operators-h2wm2\" (UID: \"6175b642-499b-42f8-acdc-490efb4a64f7\") " pod="openshift-marketplace/certified-operators-h2wm2" Dec 01 10:08:26 crc kubenswrapper[4933]: I1201 10:08:26.835006 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp6ds\" (UniqueName: \"kubernetes.io/projected/6175b642-499b-42f8-acdc-490efb4a64f7-kube-api-access-mp6ds\") pod \"certified-operators-h2wm2\" (UID: \"6175b642-499b-42f8-acdc-490efb4a64f7\") " pod="openshift-marketplace/certified-operators-h2wm2" Dec 01 10:08:26 crc kubenswrapper[4933]: I1201 10:08:26.835051 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6175b642-499b-42f8-acdc-490efb4a64f7-catalog-content\") pod \"certified-operators-h2wm2\" (UID: \"6175b642-499b-42f8-acdc-490efb4a64f7\") " pod="openshift-marketplace/certified-operators-h2wm2" Dec 01 10:08:26 crc kubenswrapper[4933]: I1201 10:08:26.835193 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6175b642-499b-42f8-acdc-490efb4a64f7-utilities\") pod \"certified-operators-h2wm2\" (UID: \"6175b642-499b-42f8-acdc-490efb4a64f7\") " pod="openshift-marketplace/certified-operators-h2wm2" Dec 01 10:08:26 crc kubenswrapper[4933]: I1201 10:08:26.835667 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6175b642-499b-42f8-acdc-490efb4a64f7-utilities\") pod \"certified-operators-h2wm2\" (UID: \"6175b642-499b-42f8-acdc-490efb4a64f7\") " pod="openshift-marketplace/certified-operators-h2wm2" Dec 01 10:08:26 crc kubenswrapper[4933]: I1201 10:08:26.835726 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6175b642-499b-42f8-acdc-490efb4a64f7-catalog-content\") pod \"certified-operators-h2wm2\" (UID: \"6175b642-499b-42f8-acdc-490efb4a64f7\") " pod="openshift-marketplace/certified-operators-h2wm2" Dec 01 10:08:26 crc kubenswrapper[4933]: I1201 10:08:26.858116 4933 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mp6ds\" (UniqueName: \"kubernetes.io/projected/6175b642-499b-42f8-acdc-490efb4a64f7-kube-api-access-mp6ds\") pod \"certified-operators-h2wm2\" (UID: \"6175b642-499b-42f8-acdc-490efb4a64f7\") " pod="openshift-marketplace/certified-operators-h2wm2" Dec 01 10:08:27 crc kubenswrapper[4933]: I1201 10:08:27.040714 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h2wm2" Dec 01 10:08:27 crc kubenswrapper[4933]: I1201 10:08:27.464552 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lbwt2"] Dec 01 10:08:27 crc kubenswrapper[4933]: I1201 10:08:27.469970 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbwt2" Dec 01 10:08:27 crc kubenswrapper[4933]: I1201 10:08:27.485221 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbwt2"] Dec 01 10:08:27 crc kubenswrapper[4933]: I1201 10:08:27.554769 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e3e7235-ac43-4e4e-add1-b7af2048b4ac-utilities\") pod \"redhat-marketplace-lbwt2\" (UID: \"0e3e7235-ac43-4e4e-add1-b7af2048b4ac\") " pod="openshift-marketplace/redhat-marketplace-lbwt2" Dec 01 10:08:27 crc kubenswrapper[4933]: I1201 10:08:27.555128 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e3e7235-ac43-4e4e-add1-b7af2048b4ac-catalog-content\") pod \"redhat-marketplace-lbwt2\" (UID: \"0e3e7235-ac43-4e4e-add1-b7af2048b4ac\") " pod="openshift-marketplace/redhat-marketplace-lbwt2" Dec 01 10:08:27 crc kubenswrapper[4933]: I1201 10:08:27.555231 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl857\" (UniqueName: \"kubernetes.io/projected/0e3e7235-ac43-4e4e-add1-b7af2048b4ac-kube-api-access-nl857\") pod \"redhat-marketplace-lbwt2\" (UID: \"0e3e7235-ac43-4e4e-add1-b7af2048b4ac\") " pod="openshift-marketplace/redhat-marketplace-lbwt2" Dec 01 10:08:27 crc kubenswrapper[4933]: I1201 10:08:27.647823 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h2wm2"] Dec 01 10:08:27 crc kubenswrapper[4933]: I1201 10:08:27.657423 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e3e7235-ac43-4e4e-add1-b7af2048b4ac-utilities\") pod \"redhat-marketplace-lbwt2\" (UID: \"0e3e7235-ac43-4e4e-add1-b7af2048b4ac\") " pod="openshift-marketplace/redhat-marketplace-lbwt2" Dec 01 10:08:27 crc kubenswrapper[4933]: I1201 10:08:27.657670 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e3e7235-ac43-4e4e-add1-b7af2048b4ac-catalog-content\") pod \"redhat-marketplace-lbwt2\" (UID: \"0e3e7235-ac43-4e4e-add1-b7af2048b4ac\") " pod="openshift-marketplace/redhat-marketplace-lbwt2" Dec 01 10:08:27 crc kubenswrapper[4933]: I1201 10:08:27.657759 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl857\" (UniqueName: \"kubernetes.io/projected/0e3e7235-ac43-4e4e-add1-b7af2048b4ac-kube-api-access-nl857\") pod \"redhat-marketplace-lbwt2\" (UID: 
\"0e3e7235-ac43-4e4e-add1-b7af2048b4ac\") " pod="openshift-marketplace/redhat-marketplace-lbwt2" Dec 01 10:08:27 crc kubenswrapper[4933]: I1201 10:08:27.657889 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e3e7235-ac43-4e4e-add1-b7af2048b4ac-utilities\") pod \"redhat-marketplace-lbwt2\" (UID: \"0e3e7235-ac43-4e4e-add1-b7af2048b4ac\") " pod="openshift-marketplace/redhat-marketplace-lbwt2" Dec 01 10:08:27 crc kubenswrapper[4933]: I1201 10:08:27.658056 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e3e7235-ac43-4e4e-add1-b7af2048b4ac-catalog-content\") pod \"redhat-marketplace-lbwt2\" (UID: \"0e3e7235-ac43-4e4e-add1-b7af2048b4ac\") " pod="openshift-marketplace/redhat-marketplace-lbwt2" Dec 01 10:08:27 crc kubenswrapper[4933]: I1201 10:08:27.681801 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl857\" (UniqueName: \"kubernetes.io/projected/0e3e7235-ac43-4e4e-add1-b7af2048b4ac-kube-api-access-nl857\") pod \"redhat-marketplace-lbwt2\" (UID: \"0e3e7235-ac43-4e4e-add1-b7af2048b4ac\") " pod="openshift-marketplace/redhat-marketplace-lbwt2" Dec 01 10:08:27 crc kubenswrapper[4933]: I1201 10:08:27.799107 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbwt2" Dec 01 10:08:28 crc kubenswrapper[4933]: I1201 10:08:28.281567 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbwt2"] Dec 01 10:08:28 crc kubenswrapper[4933]: W1201 10:08:28.287223 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e3e7235_ac43_4e4e_add1_b7af2048b4ac.slice/crio-bd93d1e14ffa1d71835187dd208f463bf11d10b6a64fef4629b611359dccb1a0 WatchSource:0}: Error finding container bd93d1e14ffa1d71835187dd208f463bf11d10b6a64fef4629b611359dccb1a0: Status 404 returned error can't find the container with id bd93d1e14ffa1d71835187dd208f463bf11d10b6a64fef4629b611359dccb1a0 Dec 01 10:08:28 crc kubenswrapper[4933]: I1201 10:08:28.409393 4933 generic.go:334] "Generic (PLEG): container finished" podID="6175b642-499b-42f8-acdc-490efb4a64f7" containerID="ef1f75429bc68c536f31444742056c918dd2d9ffc2b1c984fb03c03eab7e100a" exitCode=0 Dec 01 10:08:28 crc kubenswrapper[4933]: I1201 10:08:28.409489 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h2wm2" event={"ID":"6175b642-499b-42f8-acdc-490efb4a64f7","Type":"ContainerDied","Data":"ef1f75429bc68c536f31444742056c918dd2d9ffc2b1c984fb03c03eab7e100a"} Dec 01 10:08:28 crc kubenswrapper[4933]: I1201 10:08:28.410237 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h2wm2" event={"ID":"6175b642-499b-42f8-acdc-490efb4a64f7","Type":"ContainerStarted","Data":"5319f3cb4532ae5a0b269bb1bbef31964f6c253e5a8210b0ffdc6ba62fd3cb30"} Dec 01 10:08:28 crc kubenswrapper[4933]: I1201 10:08:28.415509 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbwt2" event={"ID":"0e3e7235-ac43-4e4e-add1-b7af2048b4ac","Type":"ContainerStarted","Data":"bd93d1e14ffa1d71835187dd208f463bf11d10b6a64fef4629b611359dccb1a0"} Dec 01 10:08:29 crc kubenswrapper[4933]: I1201 10:08:29.427672 4933 generic.go:334] "Generic (PLEG): container finished" podID="0e3e7235-ac43-4e4e-add1-b7af2048b4ac" 
containerID="7c6c6c6d6f6b0107f0c4fac4a20da3e673531512391d99621253949a9749a6a2" exitCode=0 Dec 01 10:08:29 crc kubenswrapper[4933]: I1201 10:08:29.427746 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbwt2" event={"ID":"0e3e7235-ac43-4e4e-add1-b7af2048b4ac","Type":"ContainerDied","Data":"7c6c6c6d6f6b0107f0c4fac4a20da3e673531512391d99621253949a9749a6a2"} Dec 01 10:08:29 crc kubenswrapper[4933]: I1201 10:08:29.436093 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h2wm2" event={"ID":"6175b642-499b-42f8-acdc-490efb4a64f7","Type":"ContainerStarted","Data":"893c973f98f313a8fc6c7e87e5a51fd338f9b10df9965b4ac830f182d63ba072"} Dec 01 10:08:30 crc kubenswrapper[4933]: I1201 10:08:30.450847 4933 generic.go:334] "Generic (PLEG): container finished" podID="6175b642-499b-42f8-acdc-490efb4a64f7" containerID="893c973f98f313a8fc6c7e87e5a51fd338f9b10df9965b4ac830f182d63ba072" exitCode=0 Dec 01 10:08:30 crc kubenswrapper[4933]: I1201 10:08:30.450957 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h2wm2" event={"ID":"6175b642-499b-42f8-acdc-490efb4a64f7","Type":"ContainerDied","Data":"893c973f98f313a8fc6c7e87e5a51fd338f9b10df9965b4ac830f182d63ba072"} Dec 01 10:08:30 crc kubenswrapper[4933]: I1201 10:08:30.454950 4933 generic.go:334] "Generic (PLEG): container finished" podID="0e3e7235-ac43-4e4e-add1-b7af2048b4ac" containerID="3b7baa8c31e00ffd3fa05e3b570f7a7e9568b51f74d6f61d00cfdfbf5f0ddf8a" exitCode=0 Dec 01 10:08:30 crc kubenswrapper[4933]: I1201 10:08:30.455001 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbwt2" event={"ID":"0e3e7235-ac43-4e4e-add1-b7af2048b4ac","Type":"ContainerDied","Data":"3b7baa8c31e00ffd3fa05e3b570f7a7e9568b51f74d6f61d00cfdfbf5f0ddf8a"} Dec 01 10:08:31 crc kubenswrapper[4933]: I1201 10:08:31.483638 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbwt2" event={"ID":"0e3e7235-ac43-4e4e-add1-b7af2048b4ac","Type":"ContainerStarted","Data":"5d4749233df626e9e6ad0f492d63262762c290d7b31151d7778bfaba677a05fd"} Dec 01 10:08:31 crc kubenswrapper[4933]: I1201 10:08:31.487648 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h2wm2" event={"ID":"6175b642-499b-42f8-acdc-490efb4a64f7","Type":"ContainerStarted","Data":"e1ad87b67240842047ef95fd45354b36ca081641812559b70e4584bb7d25e0ff"} Dec 01 10:08:31 crc kubenswrapper[4933]: I1201 10:08:31.507436 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lbwt2" podStartSLOduration=3.056002226 podStartE2EDuration="4.507409806s" podCreationTimestamp="2025-12-01 10:08:27 +0000 UTC" firstStartedPulling="2025-12-01 10:08:29.43080681 +0000 UTC m=+2200.072530425" lastFinishedPulling="2025-12-01 10:08:30.88221438 +0000 UTC m=+2201.523938005" observedRunningTime="2025-12-01 10:08:31.501951732 +0000 UTC m=+2202.143675347" watchObservedRunningTime="2025-12-01 10:08:31.507409806 +0000 UTC m=+2202.149133421" Dec 01 10:08:31 crc kubenswrapper[4933]: I1201 10:08:31.527718 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h2wm2" podStartSLOduration=2.959968275 podStartE2EDuration="5.527686535s" podCreationTimestamp="2025-12-01 10:08:26 +0000 UTC" firstStartedPulling="2025-12-01 10:08:28.412197075 +0000 UTC 
m=+2199.053920690" lastFinishedPulling="2025-12-01 10:08:30.979915335 +0000 UTC m=+2201.621638950" observedRunningTime="2025-12-01 10:08:31.521557874 +0000 UTC m=+2202.163281479" watchObservedRunningTime="2025-12-01 10:08:31.527686535 +0000 UTC m=+2202.169410150" Dec 01 10:08:37 crc kubenswrapper[4933]: I1201 10:08:37.040924 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h2wm2" Dec 01 10:08:37 crc kubenswrapper[4933]: I1201 10:08:37.042208 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h2wm2" Dec 01 10:08:37 crc kubenswrapper[4933]: I1201 10:08:37.106532 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h2wm2" Dec 01 10:08:37 crc kubenswrapper[4933]: I1201 10:08:37.606826 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h2wm2" Dec 01 10:08:37 crc kubenswrapper[4933]: I1201 10:08:37.799328 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lbwt2" Dec 01 10:08:37 crc kubenswrapper[4933]: I1201 10:08:37.799812 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lbwt2" Dec 01 10:08:37 crc kubenswrapper[4933]: I1201 10:08:37.849463 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lbwt2" Dec 01 10:08:38 crc kubenswrapper[4933]: I1201 10:08:38.616371 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lbwt2" Dec 01 10:08:38 crc kubenswrapper[4933]: I1201 10:08:38.758045 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h2wm2"] Dec 01 10:08:39 crc kubenswrapper[4933]: I1201 10:08:39.574699 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h2wm2" podUID="6175b642-499b-42f8-acdc-490efb4a64f7" containerName="registry-server" containerID="cri-o://e1ad87b67240842047ef95fd45354b36ca081641812559b70e4584bb7d25e0ff" gracePeriod=2 Dec 01 10:08:39 crc kubenswrapper[4933]: E1201 10:08:39.820369 4933 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6175b642_499b_42f8_acdc_490efb4a64f7.slice/crio-e1ad87b67240842047ef95fd45354b36ca081641812559b70e4584bb7d25e0ff.scope\": RecentStats: unable to find data in memory cache]" Dec 01 10:08:40 crc kubenswrapper[4933]: I1201 10:08:40.153772 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbwt2"] Dec 01 10:08:40 crc kubenswrapper[4933]: I1201 10:08:40.574064 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h2wm2" Dec 01 10:08:40 crc kubenswrapper[4933]: I1201 10:08:40.587109 4933 generic.go:334] "Generic (PLEG): container finished" podID="6175b642-499b-42f8-acdc-490efb4a64f7" containerID="e1ad87b67240842047ef95fd45354b36ca081641812559b70e4584bb7d25e0ff" exitCode=0 Dec 01 10:08:40 crc kubenswrapper[4933]: I1201 10:08:40.587158 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h2wm2" Dec 01 10:08:40 crc kubenswrapper[4933]: I1201 10:08:40.587168 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h2wm2" event={"ID":"6175b642-499b-42f8-acdc-490efb4a64f7","Type":"ContainerDied","Data":"e1ad87b67240842047ef95fd45354b36ca081641812559b70e4584bb7d25e0ff"} Dec 01 10:08:40 crc kubenswrapper[4933]: I1201 10:08:40.587214 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h2wm2" event={"ID":"6175b642-499b-42f8-acdc-490efb4a64f7","Type":"ContainerDied","Data":"5319f3cb4532ae5a0b269bb1bbef31964f6c253e5a8210b0ffdc6ba62fd3cb30"} Dec 01 10:08:40 crc kubenswrapper[4933]: I1201 10:08:40.587234 4933 scope.go:117] "RemoveContainer" containerID="e1ad87b67240842047ef95fd45354b36ca081641812559b70e4584bb7d25e0ff" Dec 01 10:08:40 crc kubenswrapper[4933]: I1201 10:08:40.621014 4933 scope.go:117] "RemoveContainer" containerID="893c973f98f313a8fc6c7e87e5a51fd338f9b10df9965b4ac830f182d63ba072" Dec 01 10:08:40 crc kubenswrapper[4933]: I1201 10:08:40.647660 4933 scope.go:117] "RemoveContainer" containerID="ef1f75429bc68c536f31444742056c918dd2d9ffc2b1c984fb03c03eab7e100a" Dec 01 10:08:40 crc kubenswrapper[4933]: I1201 10:08:40.696992 4933 scope.go:117] "RemoveContainer" containerID="e1ad87b67240842047ef95fd45354b36ca081641812559b70e4584bb7d25e0ff" Dec 01 10:08:40 crc kubenswrapper[4933]: E1201 10:08:40.698976 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1ad87b67240842047ef95fd45354b36ca081641812559b70e4584bb7d25e0ff\": container with ID starting with e1ad87b67240842047ef95fd45354b36ca081641812559b70e4584bb7d25e0ff not found: ID does not exist" containerID="e1ad87b67240842047ef95fd45354b36ca081641812559b70e4584bb7d25e0ff" Dec 01 10:08:40 crc kubenswrapper[4933]: I1201 10:08:40.699021 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1ad87b67240842047ef95fd45354b36ca081641812559b70e4584bb7d25e0ff"} err="failed to get container status \"e1ad87b67240842047ef95fd45354b36ca081641812559b70e4584bb7d25e0ff\": rpc error: code = NotFound desc = could not find container \"e1ad87b67240842047ef95fd45354b36ca081641812559b70e4584bb7d25e0ff\": container with ID starting with e1ad87b67240842047ef95fd45354b36ca081641812559b70e4584bb7d25e0ff not found: ID does not exist" Dec 01 10:08:40 crc kubenswrapper[4933]: I1201 10:08:40.699083 4933 scope.go:117] "RemoveContainer" containerID="893c973f98f313a8fc6c7e87e5a51fd338f9b10df9965b4ac830f182d63ba072" Dec 01 10:08:40 crc kubenswrapper[4933]: E1201 10:08:40.700640 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"893c973f98f313a8fc6c7e87e5a51fd338f9b10df9965b4ac830f182d63ba072\": container with ID starting with 893c973f98f313a8fc6c7e87e5a51fd338f9b10df9965b4ac830f182d63ba072 not found: ID does not exist" containerID="893c973f98f313a8fc6c7e87e5a51fd338f9b10df9965b4ac830f182d63ba072" Dec 01 10:08:40 crc kubenswrapper[4933]: I1201 10:08:40.700677 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"893c973f98f313a8fc6c7e87e5a51fd338f9b10df9965b4ac830f182d63ba072"} err="failed to get container status \"893c973f98f313a8fc6c7e87e5a51fd338f9b10df9965b4ac830f182d63ba072\": rpc error: code = NotFound desc = could not find container 
\"893c973f98f313a8fc6c7e87e5a51fd338f9b10df9965b4ac830f182d63ba072\": container with ID starting with 893c973f98f313a8fc6c7e87e5a51fd338f9b10df9965b4ac830f182d63ba072 not found: ID does not exist" Dec 01 10:08:40 crc kubenswrapper[4933]: I1201 10:08:40.700696 4933 scope.go:117] "RemoveContainer" containerID="ef1f75429bc68c536f31444742056c918dd2d9ffc2b1c984fb03c03eab7e100a" Dec 01 10:08:40 crc kubenswrapper[4933]: E1201 10:08:40.701776 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef1f75429bc68c536f31444742056c918dd2d9ffc2b1c984fb03c03eab7e100a\": container with ID starting with ef1f75429bc68c536f31444742056c918dd2d9ffc2b1c984fb03c03eab7e100a not found: ID does not exist" containerID="ef1f75429bc68c536f31444742056c918dd2d9ffc2b1c984fb03c03eab7e100a" Dec 01 10:08:40 crc kubenswrapper[4933]: I1201 10:08:40.701802 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef1f75429bc68c536f31444742056c918dd2d9ffc2b1c984fb03c03eab7e100a"} err="failed to get container status \"ef1f75429bc68c536f31444742056c918dd2d9ffc2b1c984fb03c03eab7e100a\": rpc error: code = NotFound desc = could not find container \"ef1f75429bc68c536f31444742056c918dd2d9ffc2b1c984fb03c03eab7e100a\": container with ID starting with ef1f75429bc68c536f31444742056c918dd2d9ffc2b1c984fb03c03eab7e100a not found: ID does not exist" Dec 01 10:08:40 crc kubenswrapper[4933]: I1201 10:08:40.751803 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp6ds\" (UniqueName: \"kubernetes.io/projected/6175b642-499b-42f8-acdc-490efb4a64f7-kube-api-access-mp6ds\") pod \"6175b642-499b-42f8-acdc-490efb4a64f7\" (UID: \"6175b642-499b-42f8-acdc-490efb4a64f7\") " Dec 01 10:08:40 crc kubenswrapper[4933]: I1201 10:08:40.752212 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6175b642-499b-42f8-acdc-490efb4a64f7-catalog-content\") pod \"6175b642-499b-42f8-acdc-490efb4a64f7\" (UID: \"6175b642-499b-42f8-acdc-490efb4a64f7\") " Dec 01 10:08:40 crc kubenswrapper[4933]: I1201 10:08:40.752434 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6175b642-499b-42f8-acdc-490efb4a64f7-utilities\") pod \"6175b642-499b-42f8-acdc-490efb4a64f7\" (UID: \"6175b642-499b-42f8-acdc-490efb4a64f7\") " Dec 01 10:08:40 crc kubenswrapper[4933]: I1201 10:08:40.753841 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6175b642-499b-42f8-acdc-490efb4a64f7-utilities" (OuterVolumeSpecName: "utilities") pod "6175b642-499b-42f8-acdc-490efb4a64f7" (UID: "6175b642-499b-42f8-acdc-490efb4a64f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:08:40 crc kubenswrapper[4933]: I1201 10:08:40.760448 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6175b642-499b-42f8-acdc-490efb4a64f7-kube-api-access-mp6ds" (OuterVolumeSpecName: "kube-api-access-mp6ds") pod "6175b642-499b-42f8-acdc-490efb4a64f7" (UID: "6175b642-499b-42f8-acdc-490efb4a64f7"). InnerVolumeSpecName "kube-api-access-mp6ds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:08:40 crc kubenswrapper[4933]: I1201 10:08:40.804221 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6175b642-499b-42f8-acdc-490efb4a64f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6175b642-499b-42f8-acdc-490efb4a64f7" (UID: "6175b642-499b-42f8-acdc-490efb4a64f7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:08:40 crc kubenswrapper[4933]: I1201 10:08:40.856193 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp6ds\" (UniqueName: \"kubernetes.io/projected/6175b642-499b-42f8-acdc-490efb4a64f7-kube-api-access-mp6ds\") on node \"crc\" DevicePath \"\"" Dec 01 10:08:40 crc kubenswrapper[4933]: I1201 10:08:40.856262 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6175b642-499b-42f8-acdc-490efb4a64f7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 10:08:40 crc kubenswrapper[4933]: I1201 10:08:40.856273 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6175b642-499b-42f8-acdc-490efb4a64f7-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 10:08:40 crc kubenswrapper[4933]: I1201 10:08:40.926785 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h2wm2"] Dec 01 10:08:40 crc kubenswrapper[4933]: I1201 10:08:40.937006 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h2wm2"] Dec 01 10:08:41 crc kubenswrapper[4933]: I1201 10:08:41.598033 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lbwt2" podUID="0e3e7235-ac43-4e4e-add1-b7af2048b4ac" containerName="registry-server" containerID="cri-o://5d4749233df626e9e6ad0f492d63262762c290d7b31151d7778bfaba677a05fd" gracePeriod=2 Dec 01 10:08:41 crc kubenswrapper[4933]: I1201 10:08:41.680041 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6175b642-499b-42f8-acdc-490efb4a64f7" path="/var/lib/kubelet/pods/6175b642-499b-42f8-acdc-490efb4a64f7/volumes" Dec 01 10:08:42 crc kubenswrapper[4933]: I1201 10:08:42.092586 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbwt2" Dec 01 10:08:42 crc kubenswrapper[4933]: I1201 10:08:42.283940 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e3e7235-ac43-4e4e-add1-b7af2048b4ac-catalog-content\") pod \"0e3e7235-ac43-4e4e-add1-b7af2048b4ac\" (UID: \"0e3e7235-ac43-4e4e-add1-b7af2048b4ac\") " Dec 01 10:08:42 crc kubenswrapper[4933]: I1201 10:08:42.284024 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl857\" (UniqueName: \"kubernetes.io/projected/0e3e7235-ac43-4e4e-add1-b7af2048b4ac-kube-api-access-nl857\") pod \"0e3e7235-ac43-4e4e-add1-b7af2048b4ac\" (UID: \"0e3e7235-ac43-4e4e-add1-b7af2048b4ac\") " Dec 01 10:08:42 crc kubenswrapper[4933]: I1201 10:08:42.284212 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e3e7235-ac43-4e4e-add1-b7af2048b4ac-utilities\") pod \"0e3e7235-ac43-4e4e-add1-b7af2048b4ac\" (UID: \"0e3e7235-ac43-4e4e-add1-b7af2048b4ac\") " Dec 01 10:08:42 crc kubenswrapper[4933]: I1201 10:08:42.285626 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e3e7235-ac43-4e4e-add1-b7af2048b4ac-utilities" (OuterVolumeSpecName: "utilities") pod "0e3e7235-ac43-4e4e-add1-b7af2048b4ac" (UID: "0e3e7235-ac43-4e4e-add1-b7af2048b4ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:08:42 crc kubenswrapper[4933]: I1201 10:08:42.289656 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e3e7235-ac43-4e4e-add1-b7af2048b4ac-kube-api-access-nl857" (OuterVolumeSpecName: "kube-api-access-nl857") pod "0e3e7235-ac43-4e4e-add1-b7af2048b4ac" (UID: "0e3e7235-ac43-4e4e-add1-b7af2048b4ac"). InnerVolumeSpecName "kube-api-access-nl857". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:08:42 crc kubenswrapper[4933]: I1201 10:08:42.306022 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e3e7235-ac43-4e4e-add1-b7af2048b4ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e3e7235-ac43-4e4e-add1-b7af2048b4ac" (UID: "0e3e7235-ac43-4e4e-add1-b7af2048b4ac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:08:42 crc kubenswrapper[4933]: I1201 10:08:42.387373 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e3e7235-ac43-4e4e-add1-b7af2048b4ac-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 10:08:42 crc kubenswrapper[4933]: I1201 10:08:42.387442 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl857\" (UniqueName: \"kubernetes.io/projected/0e3e7235-ac43-4e4e-add1-b7af2048b4ac-kube-api-access-nl857\") on node \"crc\" DevicePath \"\"" Dec 01 10:08:42 crc kubenswrapper[4933]: I1201 10:08:42.387458 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e3e7235-ac43-4e4e-add1-b7af2048b4ac-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 10:08:42 crc kubenswrapper[4933]: I1201 10:08:42.608980 4933 generic.go:334] "Generic (PLEG): container finished" podID="0e3e7235-ac43-4e4e-add1-b7af2048b4ac" containerID="5d4749233df626e9e6ad0f492d63262762c290d7b31151d7778bfaba677a05fd" exitCode=0 Dec 01 10:08:42 crc kubenswrapper[4933]: I1201 10:08:42.609038 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbwt2" Dec 01 10:08:42 crc kubenswrapper[4933]: I1201 10:08:42.609035 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbwt2" event={"ID":"0e3e7235-ac43-4e4e-add1-b7af2048b4ac","Type":"ContainerDied","Data":"5d4749233df626e9e6ad0f492d63262762c290d7b31151d7778bfaba677a05fd"} Dec 01 10:08:42 crc kubenswrapper[4933]: I1201 10:08:42.609266 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbwt2" event={"ID":"0e3e7235-ac43-4e4e-add1-b7af2048b4ac","Type":"ContainerDied","Data":"bd93d1e14ffa1d71835187dd208f463bf11d10b6a64fef4629b611359dccb1a0"} Dec 01 10:08:42 crc kubenswrapper[4933]: I1201 10:08:42.609317 4933 scope.go:117] "RemoveContainer" containerID="5d4749233df626e9e6ad0f492d63262762c290d7b31151d7778bfaba677a05fd" Dec 01 10:08:42 crc kubenswrapper[4933]: I1201 10:08:42.634012 4933 scope.go:117] "RemoveContainer" containerID="3b7baa8c31e00ffd3fa05e3b570f7a7e9568b51f74d6f61d00cfdfbf5f0ddf8a" Dec 01 10:08:42 crc kubenswrapper[4933]: I1201 10:08:42.654994 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbwt2"] Dec 01 10:08:42 crc kubenswrapper[4933]: I1201 10:08:42.669492 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbwt2"] Dec 01 10:08:42 crc kubenswrapper[4933]: I1201 10:08:42.688874 4933 scope.go:117] "RemoveContainer" containerID="7c6c6c6d6f6b0107f0c4fac4a20da3e673531512391d99621253949a9749a6a2" Dec 01 10:08:42 crc kubenswrapper[4933]: I1201 10:08:42.730552 4933 scope.go:117] "RemoveContainer" containerID="5d4749233df626e9e6ad0f492d63262762c290d7b31151d7778bfaba677a05fd" Dec 01 10:08:42 crc kubenswrapper[4933]: E1201 10:08:42.731054 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d4749233df626e9e6ad0f492d63262762c290d7b31151d7778bfaba677a05fd\": container with ID starting with 5d4749233df626e9e6ad0f492d63262762c290d7b31151d7778bfaba677a05fd not found: ID does not exist" containerID="5d4749233df626e9e6ad0f492d63262762c290d7b31151d7778bfaba677a05fd" Dec 01 10:08:42 crc kubenswrapper[4933]: I1201 10:08:42.731104 4933 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d4749233df626e9e6ad0f492d63262762c290d7b31151d7778bfaba677a05fd"} err="failed to get container status \"5d4749233df626e9e6ad0f492d63262762c290d7b31151d7778bfaba677a05fd\": rpc error: code = NotFound desc = could not find container \"5d4749233df626e9e6ad0f492d63262762c290d7b31151d7778bfaba677a05fd\": container with ID starting with 5d4749233df626e9e6ad0f492d63262762c290d7b31151d7778bfaba677a05fd not found: ID does not exist" Dec 01 10:08:42 crc kubenswrapper[4933]: I1201 10:08:42.731135 4933 scope.go:117] "RemoveContainer" containerID="3b7baa8c31e00ffd3fa05e3b570f7a7e9568b51f74d6f61d00cfdfbf5f0ddf8a" Dec 01 10:08:42 crc kubenswrapper[4933]: E1201 10:08:42.731489 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b7baa8c31e00ffd3fa05e3b570f7a7e9568b51f74d6f61d00cfdfbf5f0ddf8a\": container with ID starting with 3b7baa8c31e00ffd3fa05e3b570f7a7e9568b51f74d6f61d00cfdfbf5f0ddf8a not found: ID does not exist" containerID="3b7baa8c31e00ffd3fa05e3b570f7a7e9568b51f74d6f61d00cfdfbf5f0ddf8a" Dec 01 10:08:42 crc kubenswrapper[4933]: I1201 10:08:42.731538 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b7baa8c31e00ffd3fa05e3b570f7a7e9568b51f74d6f61d00cfdfbf5f0ddf8a"} err="failed to get container status \"3b7baa8c31e00ffd3fa05e3b570f7a7e9568b51f74d6f61d00cfdfbf5f0ddf8a\": rpc error: code = NotFound desc = could not find container \"3b7baa8c31e00ffd3fa05e3b570f7a7e9568b51f74d6f61d00cfdfbf5f0ddf8a\": container with ID starting with 3b7baa8c31e00ffd3fa05e3b570f7a7e9568b51f74d6f61d00cfdfbf5f0ddf8a not found: ID does not exist" Dec 01 10:08:42 crc kubenswrapper[4933]: I1201 10:08:42.731571 4933 scope.go:117] "RemoveContainer" containerID="7c6c6c6d6f6b0107f0c4fac4a20da3e673531512391d99621253949a9749a6a2" Dec 01 10:08:42 crc kubenswrapper[4933]: E1201 10:08:42.731837 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c6c6c6d6f6b0107f0c4fac4a20da3e673531512391d99621253949a9749a6a2\": container with ID starting with 7c6c6c6d6f6b0107f0c4fac4a20da3e673531512391d99621253949a9749a6a2 not found: ID does not exist" containerID="7c6c6c6d6f6b0107f0c4fac4a20da3e673531512391d99621253949a9749a6a2" Dec 01 10:08:42 crc kubenswrapper[4933]: I1201 10:08:42.731859 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c6c6c6d6f6b0107f0c4fac4a20da3e673531512391d99621253949a9749a6a2"} err="failed to get container status \"7c6c6c6d6f6b0107f0c4fac4a20da3e673531512391d99621253949a9749a6a2\": rpc error: code = NotFound desc = could not find container \"7c6c6c6d6f6b0107f0c4fac4a20da3e673531512391d99621253949a9749a6a2\": container with ID starting with 7c6c6c6d6f6b0107f0c4fac4a20da3e673531512391d99621253949a9749a6a2 not found: ID does not exist" Dec 01 10:08:43 crc kubenswrapper[4933]: I1201 10:08:43.680568 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e3e7235-ac43-4e4e-add1-b7af2048b4ac" path="/var/lib/kubelet/pods/0e3e7235-ac43-4e4e-add1-b7af2048b4ac/volumes" Dec 01 10:08:46 crc kubenswrapper[4933]: I1201 10:08:46.654100 4933 generic.go:334] "Generic (PLEG): container finished" podID="5242466a-3061-4db5-b9dd-77f6bff70350" containerID="94cecd913c657c9a535786483330f88b709b1be8fc4bb20a034c22ecc3c08b35" exitCode=0 Dec 01 10:08:46 crc kubenswrapper[4933]: I1201 
10:08:46.654246 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" event={"ID":"5242466a-3061-4db5-b9dd-77f6bff70350","Type":"ContainerDied","Data":"94cecd913c657c9a535786483330f88b709b1be8fc4bb20a034c22ecc3c08b35"} Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.074553 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.114665 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn54p\" (UniqueName: \"kubernetes.io/projected/5242466a-3061-4db5-b9dd-77f6bff70350-kube-api-access-xn54p\") pod \"5242466a-3061-4db5-b9dd-77f6bff70350\" (UID: \"5242466a-3061-4db5-b9dd-77f6bff70350\") " Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.114730 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-neutron-ovn-metadata-agent-neutron-config-0\") pod \"5242466a-3061-4db5-b9dd-77f6bff70350\" (UID: \"5242466a-3061-4db5-b9dd-77f6bff70350\") " Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.114786 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-inventory\") pod \"5242466a-3061-4db5-b9dd-77f6bff70350\" (UID: \"5242466a-3061-4db5-b9dd-77f6bff70350\") " Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.114841 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-nova-metadata-neutron-config-0\") pod \"5242466a-3061-4db5-b9dd-77f6bff70350\" (UID: \"5242466a-3061-4db5-b9dd-77f6bff70350\") " Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.114874 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-neutron-metadata-combined-ca-bundle\") pod \"5242466a-3061-4db5-b9dd-77f6bff70350\" (UID: \"5242466a-3061-4db5-b9dd-77f6bff70350\") " Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.114907 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-ssh-key\") pod \"5242466a-3061-4db5-b9dd-77f6bff70350\" (UID: \"5242466a-3061-4db5-b9dd-77f6bff70350\") " Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.122653 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "5242466a-3061-4db5-b9dd-77f6bff70350" (UID: "5242466a-3061-4db5-b9dd-77f6bff70350"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.128477 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5242466a-3061-4db5-b9dd-77f6bff70350-kube-api-access-xn54p" (OuterVolumeSpecName: "kube-api-access-xn54p") pod "5242466a-3061-4db5-b9dd-77f6bff70350" (UID: "5242466a-3061-4db5-b9dd-77f6bff70350"). InnerVolumeSpecName "kube-api-access-xn54p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.146061 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "5242466a-3061-4db5-b9dd-77f6bff70350" (UID: "5242466a-3061-4db5-b9dd-77f6bff70350"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.148205 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-inventory" (OuterVolumeSpecName: "inventory") pod "5242466a-3061-4db5-b9dd-77f6bff70350" (UID: "5242466a-3061-4db5-b9dd-77f6bff70350"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.155880 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5242466a-3061-4db5-b9dd-77f6bff70350" (UID: "5242466a-3061-4db5-b9dd-77f6bff70350"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.166675 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "5242466a-3061-4db5-b9dd-77f6bff70350" (UID: "5242466a-3061-4db5-b9dd-77f6bff70350"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.217320 4933 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.217364 4933 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.217378 4933 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.217390 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn54p\" (UniqueName: \"kubernetes.io/projected/5242466a-3061-4db5-b9dd-77f6bff70350-kube-api-access-xn54p\") on node \"crc\" DevicePath \"\"" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.217403 4933 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.217415 4933 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5242466a-3061-4db5-b9dd-77f6bff70350-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.676387 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" event={"ID":"5242466a-3061-4db5-b9dd-77f6bff70350","Type":"ContainerDied","Data":"77bc33296bf291c9a70f0e2b34a14d65c6cb07722554e8d3f4c83b23d9c55523"} Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.676443 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77bc33296bf291c9a70f0e2b34a14d65c6cb07722554e8d3f4c83b23d9c55523" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.676795 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.835100 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj"] Dec 01 10:08:48 crc kubenswrapper[4933]: E1201 10:08:48.836495 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5242466a-3061-4db5-b9dd-77f6bff70350" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.836568 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="5242466a-3061-4db5-b9dd-77f6bff70350" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 01 10:08:48 crc kubenswrapper[4933]: E1201 10:08:48.836637 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6175b642-499b-42f8-acdc-490efb4a64f7" containerName="registry-server" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.836689 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="6175b642-499b-42f8-acdc-490efb4a64f7" containerName="registry-server" Dec 01 10:08:48 crc kubenswrapper[4933]: E1201 10:08:48.836749 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3e7235-ac43-4e4e-add1-b7af2048b4ac" containerName="registry-server" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.836800 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3e7235-ac43-4e4e-add1-b7af2048b4ac" containerName="registry-server" Dec 01 10:08:48 crc kubenswrapper[4933]: E1201 10:08:48.836874 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6175b642-499b-42f8-acdc-490efb4a64f7" containerName="extract-utilities" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.836927 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="6175b642-499b-42f8-acdc-490efb4a64f7" containerName="extract-utilities" Dec 01 10:08:48 crc kubenswrapper[4933]: E1201 10:08:48.836981 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3e7235-ac43-4e4e-add1-b7af2048b4ac" containerName="extract-utilities" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.837030 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3e7235-ac43-4e4e-add1-b7af2048b4ac" containerName="extract-utilities" Dec 01 10:08:48 crc kubenswrapper[4933]: E1201 10:08:48.837083 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6175b642-499b-42f8-acdc-490efb4a64f7" containerName="extract-content" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.837135 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="6175b642-499b-42f8-acdc-490efb4a64f7" containerName="extract-content" Dec 01 10:08:48 crc kubenswrapper[4933]: E1201 10:08:48.837189 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3e7235-ac43-4e4e-add1-b7af2048b4ac" containerName="extract-content" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.837244 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3e7235-ac43-4e4e-add1-b7af2048b4ac" containerName="extract-content" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.837527 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="5242466a-3061-4db5-b9dd-77f6bff70350" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.837596 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e3e7235-ac43-4e4e-add1-b7af2048b4ac" 
containerName="registry-server" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.837675 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="6175b642-499b-42f8-acdc-490efb4a64f7" containerName="registry-server" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.838555 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.846705 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.846705 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.847565 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.848959 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj"] Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.849484 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.849506 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8vmpq" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.933275 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/db1900b9-3716-46b2-9761-18a6721bd258-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj\" (UID: \"db1900b9-3716-46b2-9761-18a6721bd258\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.933551 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db1900b9-3716-46b2-9761-18a6721bd258-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj\" (UID: \"db1900b9-3716-46b2-9761-18a6721bd258\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.933585 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db1900b9-3716-46b2-9761-18a6721bd258-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj\" (UID: \"db1900b9-3716-46b2-9761-18a6721bd258\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.933619 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9b2m\" (UniqueName: \"kubernetes.io/projected/db1900b9-3716-46b2-9761-18a6721bd258-kube-api-access-q9b2m\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj\" (UID: \"db1900b9-3716-46b2-9761-18a6721bd258\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj" Dec 01 10:08:48 crc kubenswrapper[4933]: I1201 10:08:48.933744 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/db1900b9-3716-46b2-9761-18a6721bd258-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj\" (UID: \"db1900b9-3716-46b2-9761-18a6721bd258\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj" Dec 01 10:08:49 crc kubenswrapper[4933]: I1201 10:08:49.034931 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db1900b9-3716-46b2-9761-18a6721bd258-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj\" (UID: \"db1900b9-3716-46b2-9761-18a6721bd258\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj" Dec 01 10:08:49 crc kubenswrapper[4933]: I1201 10:08:49.035044 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/db1900b9-3716-46b2-9761-18a6721bd258-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj\" (UID: \"db1900b9-3716-46b2-9761-18a6721bd258\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj" Dec 01 10:08:49 crc kubenswrapper[4933]: I1201 10:08:49.035114 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db1900b9-3716-46b2-9761-18a6721bd258-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj\" (UID: \"db1900b9-3716-46b2-9761-18a6721bd258\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj" Dec 01 10:08:49 crc kubenswrapper[4933]: I1201 10:08:49.035140 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db1900b9-3716-46b2-9761-18a6721bd258-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj\" (UID: \"db1900b9-3716-46b2-9761-18a6721bd258\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj" Dec 01 10:08:49 crc kubenswrapper[4933]: I1201 10:08:49.035177 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9b2m\" (UniqueName: \"kubernetes.io/projected/db1900b9-3716-46b2-9761-18a6721bd258-kube-api-access-q9b2m\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj\" (UID: \"db1900b9-3716-46b2-9761-18a6721bd258\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj" Dec 01 10:08:49 crc kubenswrapper[4933]: I1201 10:08:49.040708 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/db1900b9-3716-46b2-9761-18a6721bd258-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj\" (UID: \"db1900b9-3716-46b2-9761-18a6721bd258\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj" Dec 01 10:08:49 crc kubenswrapper[4933]: I1201 10:08:49.040933 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db1900b9-3716-46b2-9761-18a6721bd258-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj\" (UID: \"db1900b9-3716-46b2-9761-18a6721bd258\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj" Dec 01 10:08:49 crc kubenswrapper[4933]: I1201 10:08:49.041795 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db1900b9-3716-46b2-9761-18a6721bd258-libvirt-combined-ca-bundle\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj\" (UID: \"db1900b9-3716-46b2-9761-18a6721bd258\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj" Dec 01 10:08:49 crc kubenswrapper[4933]: I1201 10:08:49.044207 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db1900b9-3716-46b2-9761-18a6721bd258-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj\" (UID: \"db1900b9-3716-46b2-9761-18a6721bd258\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj" Dec 01 10:08:49 crc kubenswrapper[4933]: I1201 10:08:49.071332 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9b2m\" (UniqueName: \"kubernetes.io/projected/db1900b9-3716-46b2-9761-18a6721bd258-kube-api-access-q9b2m\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj\" (UID: \"db1900b9-3716-46b2-9761-18a6721bd258\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj" Dec 01 10:08:49 crc kubenswrapper[4933]: I1201 10:08:49.158735 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj" Dec 01 10:08:49 crc kubenswrapper[4933]: I1201 10:08:49.715578 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj"] Dec 01 10:08:49 crc kubenswrapper[4933]: W1201 10:08:49.718733 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb1900b9_3716_46b2_9761_18a6721bd258.slice/crio-a16e0d79fc681e7fd0fe80266d4425f3056c9f1e03f20894746e352948bacbb7 WatchSource:0}: Error finding container a16e0d79fc681e7fd0fe80266d4425f3056c9f1e03f20894746e352948bacbb7: Status 404 returned error can't find the container with id a16e0d79fc681e7fd0fe80266d4425f3056c9f1e03f20894746e352948bacbb7 Dec 01 10:08:50 crc kubenswrapper[4933]: I1201 10:08:50.209701 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 10:08:50 crc kubenswrapper[4933]: I1201 10:08:50.702755 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj" event={"ID":"db1900b9-3716-46b2-9761-18a6721bd258","Type":"ContainerStarted","Data":"2ed9755d4afbeb44da765d385191846759a85990dc5565a5305852566eb7b20b"} Dec 01 10:08:50 crc kubenswrapper[4933]: I1201 10:08:50.703194 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj" event={"ID":"db1900b9-3716-46b2-9761-18a6721bd258","Type":"ContainerStarted","Data":"a16e0d79fc681e7fd0fe80266d4425f3056c9f1e03f20894746e352948bacbb7"} Dec 01 10:08:50 crc kubenswrapper[4933]: I1201 10:08:50.720872 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj" podStartSLOduration=2.23462628 podStartE2EDuration="2.720851076s" podCreationTimestamp="2025-12-01 10:08:48 +0000 UTC" firstStartedPulling="2025-12-01 10:08:49.720960898 +0000 UTC m=+2220.362684513" lastFinishedPulling="2025-12-01 10:08:50.207185694 +0000 UTC m=+2220.848909309" observedRunningTime="2025-12-01 10:08:50.717109153 +0000 UTC m=+2221.358832778" watchObservedRunningTime="2025-12-01 10:08:50.720851076 +0000 UTC m=+2221.362574691" Dec 01 10:10:41 crc kubenswrapper[4933]: I1201 10:10:41.741752 4933 patch_prober.go:28] interesting 
pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 10:10:41 crc kubenswrapper[4933]: I1201 10:10:41.742489 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 10:11:11 crc kubenswrapper[4933]: I1201 10:11:11.741843 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 10:11:11 crc kubenswrapper[4933]: I1201 10:11:11.742671 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 10:11:41 crc kubenswrapper[4933]: I1201 10:11:41.742006 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 10:11:41 crc kubenswrapper[4933]: I1201 10:11:41.743330 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 10:11:41 crc kubenswrapper[4933]: I1201 10:11:41.743422 4933 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" Dec 01 10:11:41 crc kubenswrapper[4933]: I1201 10:11:41.744822 4933 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907"} pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 10:11:41 crc kubenswrapper[4933]: I1201 10:11:41.744889 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" containerID="cri-o://009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907" gracePeriod=600 Dec 01 10:11:41 crc kubenswrapper[4933]: E1201 10:11:41.876996 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:11:42 crc kubenswrapper[4933]: I1201 10:11:42.566267 4933 generic.go:334] "Generic (PLEG): container finished" podID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907" exitCode=0 Dec 01 10:11:42 crc kubenswrapper[4933]: I1201 10:11:42.566347 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerDied","Data":"009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907"} Dec 01 10:11:42 crc kubenswrapper[4933]: I1201 10:11:42.566443 4933 scope.go:117] "RemoveContainer" containerID="7a98bf93ed27d22c27782b178fc01678ac5109b9d40b5fc3e17d08873b6a98b6" Dec 01 10:11:42 crc kubenswrapper[4933]: I1201 10:11:42.569798 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907" Dec 01 10:11:42 crc kubenswrapper[4933]: E1201 10:11:42.570564 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:11:47 crc kubenswrapper[4933]: I1201 10:11:47.434262 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fbmjj"] Dec 01 10:11:47 crc kubenswrapper[4933]: I1201 10:11:47.436830 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fbmjj" Dec 01 10:11:47 crc kubenswrapper[4933]: I1201 10:11:47.455484 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fbmjj"] Dec 01 10:11:47 crc kubenswrapper[4933]: I1201 10:11:47.578421 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c85994be-f09d-408e-88db-55639819d7ee-utilities\") pod \"redhat-operators-fbmjj\" (UID: \"c85994be-f09d-408e-88db-55639819d7ee\") " pod="openshift-marketplace/redhat-operators-fbmjj" Dec 01 10:11:47 crc kubenswrapper[4933]: I1201 10:11:47.578476 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c85994be-f09d-408e-88db-55639819d7ee-catalog-content\") pod \"redhat-operators-fbmjj\" (UID: \"c85994be-f09d-408e-88db-55639819d7ee\") " pod="openshift-marketplace/redhat-operators-fbmjj" Dec 01 10:11:47 crc kubenswrapper[4933]: I1201 10:11:47.578540 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbtzf\" (UniqueName: \"kubernetes.io/projected/c85994be-f09d-408e-88db-55639819d7ee-kube-api-access-xbtzf\") pod \"redhat-operators-fbmjj\" (UID: \"c85994be-f09d-408e-88db-55639819d7ee\") " pod="openshift-marketplace/redhat-operators-fbmjj" Dec 01 10:11:47 crc kubenswrapper[4933]: I1201 10:11:47.679976 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c85994be-f09d-408e-88db-55639819d7ee-utilities\") pod \"redhat-operators-fbmjj\" (UID: \"c85994be-f09d-408e-88db-55639819d7ee\") " pod="openshift-marketplace/redhat-operators-fbmjj" Dec 01 10:11:47 crc kubenswrapper[4933]: I1201 10:11:47.680027 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c85994be-f09d-408e-88db-55639819d7ee-catalog-content\") pod \"redhat-operators-fbmjj\" (UID: \"c85994be-f09d-408e-88db-55639819d7ee\") " pod="openshift-marketplace/redhat-operators-fbmjj" Dec 01 10:11:47 crc kubenswrapper[4933]: I1201 10:11:47.680106 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbtzf\" (UniqueName: \"kubernetes.io/projected/c85994be-f09d-408e-88db-55639819d7ee-kube-api-access-xbtzf\") pod \"redhat-operators-fbmjj\" (UID: \"c85994be-f09d-408e-88db-55639819d7ee\") " pod="openshift-marketplace/redhat-operators-fbmjj" Dec 01 10:11:47 crc kubenswrapper[4933]: I1201 10:11:47.680660 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c85994be-f09d-408e-88db-55639819d7ee-utilities\") pod \"redhat-operators-fbmjj\" (UID: \"c85994be-f09d-408e-88db-55639819d7ee\") " pod="openshift-marketplace/redhat-operators-fbmjj" Dec 01 10:11:47 crc kubenswrapper[4933]: I1201 10:11:47.680737 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c85994be-f09d-408e-88db-55639819d7ee-catalog-content\") pod \"redhat-operators-fbmjj\" (UID: \"c85994be-f09d-408e-88db-55639819d7ee\") " pod="openshift-marketplace/redhat-operators-fbmjj" Dec 01 10:11:47 crc kubenswrapper[4933]: I1201 10:11:47.709863 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xbtzf\" (UniqueName: \"kubernetes.io/projected/c85994be-f09d-408e-88db-55639819d7ee-kube-api-access-xbtzf\") pod \"redhat-operators-fbmjj\" (UID: \"c85994be-f09d-408e-88db-55639819d7ee\") " pod="openshift-marketplace/redhat-operators-fbmjj" Dec 01 10:11:47 crc kubenswrapper[4933]: I1201 10:11:47.759795 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fbmjj" Dec 01 10:11:48 crc kubenswrapper[4933]: I1201 10:11:48.279078 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fbmjj"] Dec 01 10:11:48 crc kubenswrapper[4933]: I1201 10:11:48.661844 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbmjj" event={"ID":"c85994be-f09d-408e-88db-55639819d7ee","Type":"ContainerStarted","Data":"95e444d8692b578b49c47da8b8fc998c174d42554a4e7a0b60e03eba15839b68"} Dec 01 10:11:49 crc kubenswrapper[4933]: I1201 10:11:49.731714 4933 generic.go:334] "Generic (PLEG): container finished" podID="c85994be-f09d-408e-88db-55639819d7ee" containerID="c541325c701399b3581d2a57aa7bb4fa546436d5d3308fcac8959949a6c2daff" exitCode=0 Dec 01 10:11:49 crc kubenswrapper[4933]: I1201 10:11:49.736470 4933 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 10:11:49 crc kubenswrapper[4933]: I1201 10:11:49.752092 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbmjj" event={"ID":"c85994be-f09d-408e-88db-55639819d7ee","Type":"ContainerDied","Data":"c541325c701399b3581d2a57aa7bb4fa546436d5d3308fcac8959949a6c2daff"} Dec 01 10:11:50 crc kubenswrapper[4933]: I1201 10:11:50.742999 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbmjj" event={"ID":"c85994be-f09d-408e-88db-55639819d7ee","Type":"ContainerStarted","Data":"e649d82ad346ed2c8db65c42fe4fd8f2f46479430dcd5286b72759fd979b9487"} Dec 01 10:11:51 crc kubenswrapper[4933]: I1201 10:11:51.770393 4933 generic.go:334] "Generic (PLEG): container finished" podID="c85994be-f09d-408e-88db-55639819d7ee" containerID="e649d82ad346ed2c8db65c42fe4fd8f2f46479430dcd5286b72759fd979b9487" exitCode=0 Dec 01 10:11:51 crc kubenswrapper[4933]: I1201 10:11:51.771528 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbmjj" event={"ID":"c85994be-f09d-408e-88db-55639819d7ee","Type":"ContainerDied","Data":"e649d82ad346ed2c8db65c42fe4fd8f2f46479430dcd5286b72759fd979b9487"} Dec 01 10:11:52 crc kubenswrapper[4933]: I1201 10:11:52.784368 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbmjj" event={"ID":"c85994be-f09d-408e-88db-55639819d7ee","Type":"ContainerStarted","Data":"ea5197fa021a942c5ba96e90eda726bbc34410b771661b01893cf102d41f8010"} Dec 01 10:11:52 crc kubenswrapper[4933]: I1201 10:11:52.805830 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fbmjj" podStartSLOduration=3.026867196 podStartE2EDuration="5.805809207s" podCreationTimestamp="2025-12-01 10:11:47 +0000 UTC" firstStartedPulling="2025-12-01 10:11:49.736102791 +0000 UTC m=+2400.377826406" lastFinishedPulling="2025-12-01 10:11:52.515044802 +0000 UTC m=+2403.156768417" observedRunningTime="2025-12-01 10:11:52.801439159 +0000 UTC m=+2403.443162794" watchObservedRunningTime="2025-12-01 10:11:52.805809207 +0000 UTC m=+2403.447532822" Dec 01 10:11:57 crc 
kubenswrapper[4933]: I1201 10:11:57.669047 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907" Dec 01 10:11:57 crc kubenswrapper[4933]: E1201 10:11:57.669832 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:11:57 crc kubenswrapper[4933]: I1201 10:11:57.760424 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fbmjj" Dec 01 10:11:57 crc kubenswrapper[4933]: I1201 10:11:57.760923 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fbmjj" Dec 01 10:11:57 crc kubenswrapper[4933]: I1201 10:11:57.815902 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fbmjj" Dec 01 10:11:57 crc kubenswrapper[4933]: I1201 10:11:57.880056 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fbmjj" Dec 01 10:11:58 crc kubenswrapper[4933]: I1201 10:11:58.427473 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fbmjj"] Dec 01 10:11:59 crc kubenswrapper[4933]: I1201 10:11:59.862829 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fbmjj" podUID="c85994be-f09d-408e-88db-55639819d7ee" containerName="registry-server" containerID="cri-o://ea5197fa021a942c5ba96e90eda726bbc34410b771661b01893cf102d41f8010" gracePeriod=2 Dec 01 10:12:00 crc kubenswrapper[4933]: I1201 10:12:00.807445 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fbmjj" Dec 01 10:12:00 crc kubenswrapper[4933]: I1201 10:12:00.875475 4933 generic.go:334] "Generic (PLEG): container finished" podID="c85994be-f09d-408e-88db-55639819d7ee" containerID="ea5197fa021a942c5ba96e90eda726bbc34410b771661b01893cf102d41f8010" exitCode=0 Dec 01 10:12:00 crc kubenswrapper[4933]: I1201 10:12:00.875570 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbmjj" event={"ID":"c85994be-f09d-408e-88db-55639819d7ee","Type":"ContainerDied","Data":"ea5197fa021a942c5ba96e90eda726bbc34410b771661b01893cf102d41f8010"} Dec 01 10:12:00 crc kubenswrapper[4933]: I1201 10:12:00.876528 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbmjj" event={"ID":"c85994be-f09d-408e-88db-55639819d7ee","Type":"ContainerDied","Data":"95e444d8692b578b49c47da8b8fc998c174d42554a4e7a0b60e03eba15839b68"} Dec 01 10:12:00 crc kubenswrapper[4933]: I1201 10:12:00.876607 4933 scope.go:117] "RemoveContainer" containerID="ea5197fa021a942c5ba96e90eda726bbc34410b771661b01893cf102d41f8010" Dec 01 10:12:00 crc kubenswrapper[4933]: I1201 10:12:00.875627 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fbmjj" Dec 01 10:12:00 crc kubenswrapper[4933]: I1201 10:12:00.887161 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c85994be-f09d-408e-88db-55639819d7ee-utilities\") pod \"c85994be-f09d-408e-88db-55639819d7ee\" (UID: \"c85994be-f09d-408e-88db-55639819d7ee\") " Dec 01 10:12:00 crc kubenswrapper[4933]: I1201 10:12:00.887315 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c85994be-f09d-408e-88db-55639819d7ee-catalog-content\") pod \"c85994be-f09d-408e-88db-55639819d7ee\" (UID: \"c85994be-f09d-408e-88db-55639819d7ee\") " Dec 01 10:12:00 crc kubenswrapper[4933]: I1201 10:12:00.887510 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbtzf\" (UniqueName: \"kubernetes.io/projected/c85994be-f09d-408e-88db-55639819d7ee-kube-api-access-xbtzf\") pod \"c85994be-f09d-408e-88db-55639819d7ee\" (UID: \"c85994be-f09d-408e-88db-55639819d7ee\") " Dec 01 10:12:00 crc kubenswrapper[4933]: I1201 10:12:00.888755 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c85994be-f09d-408e-88db-55639819d7ee-utilities" (OuterVolumeSpecName: "utilities") pod "c85994be-f09d-408e-88db-55639819d7ee" (UID: "c85994be-f09d-408e-88db-55639819d7ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:12:00 crc kubenswrapper[4933]: I1201 10:12:00.899814 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c85994be-f09d-408e-88db-55639819d7ee-kube-api-access-xbtzf" (OuterVolumeSpecName: "kube-api-access-xbtzf") pod "c85994be-f09d-408e-88db-55639819d7ee" (UID: "c85994be-f09d-408e-88db-55639819d7ee"). InnerVolumeSpecName "kube-api-access-xbtzf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:12:00 crc kubenswrapper[4933]: I1201 10:12:00.904881 4933 scope.go:117] "RemoveContainer" containerID="e649d82ad346ed2c8db65c42fe4fd8f2f46479430dcd5286b72759fd979b9487" Dec 01 10:12:00 crc kubenswrapper[4933]: I1201 10:12:00.974469 4933 scope.go:117] "RemoveContainer" containerID="c541325c701399b3581d2a57aa7bb4fa546436d5d3308fcac8959949a6c2daff" Dec 01 10:12:00 crc kubenswrapper[4933]: I1201 10:12:00.990126 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c85994be-f09d-408e-88db-55639819d7ee-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 10:12:00 crc kubenswrapper[4933]: I1201 10:12:00.990171 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbtzf\" (UniqueName: \"kubernetes.io/projected/c85994be-f09d-408e-88db-55639819d7ee-kube-api-access-xbtzf\") on node \"crc\" DevicePath \"\"" Dec 01 10:12:01 crc kubenswrapper[4933]: I1201 10:12:01.024899 4933 scope.go:117] "RemoveContainer" containerID="ea5197fa021a942c5ba96e90eda726bbc34410b771661b01893cf102d41f8010" Dec 01 10:12:01 crc kubenswrapper[4933]: E1201 10:12:01.025507 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea5197fa021a942c5ba96e90eda726bbc34410b771661b01893cf102d41f8010\": container with ID starting with ea5197fa021a942c5ba96e90eda726bbc34410b771661b01893cf102d41f8010 not found: ID does not exist" containerID="ea5197fa021a942c5ba96e90eda726bbc34410b771661b01893cf102d41f8010" Dec 01 10:12:01 crc kubenswrapper[4933]: I1201 10:12:01.025563 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea5197fa021a942c5ba96e90eda726bbc34410b771661b01893cf102d41f8010"} err="failed to get container status \"ea5197fa021a942c5ba96e90eda726bbc34410b771661b01893cf102d41f8010\": rpc error: code = NotFound desc = could not find container \"ea5197fa021a942c5ba96e90eda726bbc34410b771661b01893cf102d41f8010\": container with ID starting with ea5197fa021a942c5ba96e90eda726bbc34410b771661b01893cf102d41f8010 not found: ID does not exist" Dec 01 10:12:01 crc kubenswrapper[4933]: I1201 10:12:01.025590 4933 scope.go:117] "RemoveContainer" containerID="e649d82ad346ed2c8db65c42fe4fd8f2f46479430dcd5286b72759fd979b9487" Dec 01 10:12:01 crc kubenswrapper[4933]: E1201 10:12:01.026289 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e649d82ad346ed2c8db65c42fe4fd8f2f46479430dcd5286b72759fd979b9487\": container with ID starting with e649d82ad346ed2c8db65c42fe4fd8f2f46479430dcd5286b72759fd979b9487 not found: ID does not exist" containerID="e649d82ad346ed2c8db65c42fe4fd8f2f46479430dcd5286b72759fd979b9487" Dec 01 10:12:01 crc kubenswrapper[4933]: I1201 10:12:01.026332 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e649d82ad346ed2c8db65c42fe4fd8f2f46479430dcd5286b72759fd979b9487"} err="failed to get container status \"e649d82ad346ed2c8db65c42fe4fd8f2f46479430dcd5286b72759fd979b9487\": rpc error: code = NotFound desc = could not find container \"e649d82ad346ed2c8db65c42fe4fd8f2f46479430dcd5286b72759fd979b9487\": container with ID starting with e649d82ad346ed2c8db65c42fe4fd8f2f46479430dcd5286b72759fd979b9487 not found: ID does not exist" Dec 01 10:12:01 crc kubenswrapper[4933]: I1201 10:12:01.026355 4933 scope.go:117] "RemoveContainer" 
containerID="c541325c701399b3581d2a57aa7bb4fa546436d5d3308fcac8959949a6c2daff" Dec 01 10:12:01 crc kubenswrapper[4933]: E1201 10:12:01.026626 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c541325c701399b3581d2a57aa7bb4fa546436d5d3308fcac8959949a6c2daff\": container with ID starting with c541325c701399b3581d2a57aa7bb4fa546436d5d3308fcac8959949a6c2daff not found: ID does not exist" containerID="c541325c701399b3581d2a57aa7bb4fa546436d5d3308fcac8959949a6c2daff" Dec 01 10:12:01 crc kubenswrapper[4933]: I1201 10:12:01.026666 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c541325c701399b3581d2a57aa7bb4fa546436d5d3308fcac8959949a6c2daff"} err="failed to get container status \"c541325c701399b3581d2a57aa7bb4fa546436d5d3308fcac8959949a6c2daff\": rpc error: code = NotFound desc = could not find container \"c541325c701399b3581d2a57aa7bb4fa546436d5d3308fcac8959949a6c2daff\": container with ID starting with c541325c701399b3581d2a57aa7bb4fa546436d5d3308fcac8959949a6c2daff not found: ID does not exist" Dec 01 10:12:01 crc kubenswrapper[4933]: I1201 10:12:01.027496 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c85994be-f09d-408e-88db-55639819d7ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c85994be-f09d-408e-88db-55639819d7ee" (UID: "c85994be-f09d-408e-88db-55639819d7ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:12:01 crc kubenswrapper[4933]: I1201 10:12:01.092328 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c85994be-f09d-408e-88db-55639819d7ee-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 10:12:01 crc kubenswrapper[4933]: I1201 10:12:01.220489 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fbmjj"] Dec 01 10:12:01 crc kubenswrapper[4933]: I1201 10:12:01.236065 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fbmjj"] Dec 01 10:12:01 crc kubenswrapper[4933]: I1201 10:12:01.686782 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c85994be-f09d-408e-88db-55639819d7ee" path="/var/lib/kubelet/pods/c85994be-f09d-408e-88db-55639819d7ee/volumes" Dec 01 10:12:08 crc kubenswrapper[4933]: I1201 10:12:08.668818 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907" Dec 01 10:12:08 crc kubenswrapper[4933]: E1201 10:12:08.669900 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:12:22 crc kubenswrapper[4933]: I1201 10:12:22.668388 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907" Dec 01 10:12:22 crc kubenswrapper[4933]: E1201 10:12:22.669390 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:12:37 crc kubenswrapper[4933]: I1201 10:12:37.667609 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907" Dec 01 10:12:37 crc kubenswrapper[4933]: E1201 10:12:37.668585 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:12:52 crc kubenswrapper[4933]: I1201 10:12:52.669058 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907" Dec 01 10:12:52 crc kubenswrapper[4933]: E1201 10:12:52.669947 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:13:07 crc kubenswrapper[4933]: I1201 10:13:07.668225 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907" Dec 01 10:13:07 crc kubenswrapper[4933]: E1201 10:13:07.669547 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:13:08 crc kubenswrapper[4933]: I1201 10:13:08.534555 4933 generic.go:334] "Generic (PLEG): container finished" podID="db1900b9-3716-46b2-9761-18a6721bd258" containerID="2ed9755d4afbeb44da765d385191846759a85990dc5565a5305852566eb7b20b" exitCode=0 Dec 01 10:13:08 crc kubenswrapper[4933]: I1201 10:13:08.534650 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj" event={"ID":"db1900b9-3716-46b2-9761-18a6721bd258","Type":"ContainerDied","Data":"2ed9755d4afbeb44da765d385191846759a85990dc5565a5305852566eb7b20b"} Dec 01 10:13:09 crc kubenswrapper[4933]: I1201 10:13:09.976994 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.119547 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db1900b9-3716-46b2-9761-18a6721bd258-inventory\") pod \"db1900b9-3716-46b2-9761-18a6721bd258\" (UID: \"db1900b9-3716-46b2-9761-18a6721bd258\") " Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.119662 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/db1900b9-3716-46b2-9761-18a6721bd258-libvirt-secret-0\") pod \"db1900b9-3716-46b2-9761-18a6721bd258\" (UID: \"db1900b9-3716-46b2-9761-18a6721bd258\") " Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.119710 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9b2m\" (UniqueName: \"kubernetes.io/projected/db1900b9-3716-46b2-9761-18a6721bd258-kube-api-access-q9b2m\") pod \"db1900b9-3716-46b2-9761-18a6721bd258\" (UID: \"db1900b9-3716-46b2-9761-18a6721bd258\") " Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.119744 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db1900b9-3716-46b2-9761-18a6721bd258-libvirt-combined-ca-bundle\") pod \"db1900b9-3716-46b2-9761-18a6721bd258\" (UID: \"db1900b9-3716-46b2-9761-18a6721bd258\") " Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.119819 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db1900b9-3716-46b2-9761-18a6721bd258-ssh-key\") pod \"db1900b9-3716-46b2-9761-18a6721bd258\" (UID: \"db1900b9-3716-46b2-9761-18a6721bd258\") " Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.125609 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db1900b9-3716-46b2-9761-18a6721bd258-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "db1900b9-3716-46b2-9761-18a6721bd258" (UID: "db1900b9-3716-46b2-9761-18a6721bd258"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.126180 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db1900b9-3716-46b2-9761-18a6721bd258-kube-api-access-q9b2m" (OuterVolumeSpecName: "kube-api-access-q9b2m") pod "db1900b9-3716-46b2-9761-18a6721bd258" (UID: "db1900b9-3716-46b2-9761-18a6721bd258"). InnerVolumeSpecName "kube-api-access-q9b2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.146405 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db1900b9-3716-46b2-9761-18a6721bd258-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "db1900b9-3716-46b2-9761-18a6721bd258" (UID: "db1900b9-3716-46b2-9761-18a6721bd258"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.147815 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db1900b9-3716-46b2-9761-18a6721bd258-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "db1900b9-3716-46b2-9761-18a6721bd258" (UID: "db1900b9-3716-46b2-9761-18a6721bd258"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.148886 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db1900b9-3716-46b2-9761-18a6721bd258-inventory" (OuterVolumeSpecName: "inventory") pod "db1900b9-3716-46b2-9761-18a6721bd258" (UID: "db1900b9-3716-46b2-9761-18a6721bd258"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.222911 4933 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db1900b9-3716-46b2-9761-18a6721bd258-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.223329 4933 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/db1900b9-3716-46b2-9761-18a6721bd258-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.223557 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9b2m\" (UniqueName: \"kubernetes.io/projected/db1900b9-3716-46b2-9761-18a6721bd258-kube-api-access-q9b2m\") on node \"crc\" DevicePath \"\"" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.223678 4933 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db1900b9-3716-46b2-9761-18a6721bd258-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.223822 4933 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db1900b9-3716-46b2-9761-18a6721bd258-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.570711 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj" event={"ID":"db1900b9-3716-46b2-9761-18a6721bd258","Type":"ContainerDied","Data":"a16e0d79fc681e7fd0fe80266d4425f3056c9f1e03f20894746e352948bacbb7"} Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.571078 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a16e0d79fc681e7fd0fe80266d4425f3056c9f1e03f20894746e352948bacbb7" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.570879 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.676183 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc"] Dec 01 10:13:10 crc kubenswrapper[4933]: E1201 10:13:10.676919 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c85994be-f09d-408e-88db-55639819d7ee" containerName="registry-server" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.676946 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="c85994be-f09d-408e-88db-55639819d7ee" containerName="registry-server" Dec 01 10:13:10 crc kubenswrapper[4933]: E1201 10:13:10.676967 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db1900b9-3716-46b2-9761-18a6721bd258" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.676977 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="db1900b9-3716-46b2-9761-18a6721bd258" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 01 10:13:10 crc kubenswrapper[4933]: E1201 10:13:10.676994 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c85994be-f09d-408e-88db-55639819d7ee" containerName="extract-utilities" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.677002 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="c85994be-f09d-408e-88db-55639819d7ee" containerName="extract-utilities" Dec 01 10:13:10 crc kubenswrapper[4933]: E1201 10:13:10.677013 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c85994be-f09d-408e-88db-55639819d7ee" containerName="extract-content" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.677020 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="c85994be-f09d-408e-88db-55639819d7ee" containerName="extract-content" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.677262 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="c85994be-f09d-408e-88db-55639819d7ee" containerName="registry-server" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.677289 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="db1900b9-3716-46b2-9761-18a6721bd258" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.678345 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.681744 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.682424 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8vmpq" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.683161 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.683371 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.683591 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.683838 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.683989 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.688702 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc"] Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.836317 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.836388 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99czs\" (UniqueName: \"kubernetes.io/projected/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-kube-api-access-99czs\") pod \"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.836465 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.837013 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.837175 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.837229 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.837383 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.837543 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.837584 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.939065 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.939140 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.939166 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.939189 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.939228 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.939251 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.939275 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.939347 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99czs\" (UniqueName: \"kubernetes.io/projected/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-kube-api-access-99czs\") pod \"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.939413 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.940573 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.945603 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.945775 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-combined-ca-bundle\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.945780 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.945928 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.946837 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.947236 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.947845 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:10 crc kubenswrapper[4933]: I1201 10:13:10.960282 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99czs\" (UniqueName: \"kubernetes.io/projected/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-kube-api-access-99czs\") pod \"nova-edpm-deployment-openstack-edpm-ipam-69txc\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:11 crc kubenswrapper[4933]: I1201 10:13:11.002903 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:13:11 crc kubenswrapper[4933]: I1201 10:13:11.573258 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc"] Dec 01 10:13:11 crc kubenswrapper[4933]: I1201 10:13:11.584279 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" event={"ID":"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d","Type":"ContainerStarted","Data":"13b3e429cef4ff1dc4c78aa9f00fb93e9cfe0d919dcc869cfff1650a3f4ad976"} Dec 01 10:13:13 crc kubenswrapper[4933]: I1201 10:13:13.605662 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" event={"ID":"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d","Type":"ContainerStarted","Data":"64585eec319b65cd1f4fd9005bfb6ff9ad2d8bbafad8c668260c854f2d6155e9"} Dec 01 10:13:13 crc kubenswrapper[4933]: I1201 10:13:13.631813 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" podStartSLOduration=2.431347963 podStartE2EDuration="3.631790353s" podCreationTimestamp="2025-12-01 10:13:10 +0000 UTC" firstStartedPulling="2025-12-01 10:13:11.569454997 +0000 UTC m=+2482.211178612" lastFinishedPulling="2025-12-01 10:13:12.769897387 +0000 UTC m=+2483.411621002" observedRunningTime="2025-12-01 10:13:13.627530338 +0000 UTC m=+2484.269253953" watchObservedRunningTime="2025-12-01 10:13:13.631790353 +0000 UTC m=+2484.273513968" Dec 01 10:13:22 crc kubenswrapper[4933]: I1201 10:13:22.667902 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907" Dec 01 10:13:22 crc kubenswrapper[4933]: E1201 10:13:22.668992 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:13:33 crc kubenswrapper[4933]: I1201 10:13:33.671669 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907" Dec 01 10:13:33 crc kubenswrapper[4933]: E1201 10:13:33.672610 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:13:48 crc kubenswrapper[4933]: I1201 10:13:48.669720 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907" Dec 01 10:13:48 crc kubenswrapper[4933]: E1201 10:13:48.670911 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" 
podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:13:59 crc kubenswrapper[4933]: I1201 10:13:59.706187 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907" Dec 01 10:13:59 crc kubenswrapper[4933]: E1201 10:13:59.707847 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:14:12 crc kubenswrapper[4933]: I1201 10:14:12.667755 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907" Dec 01 10:14:12 crc kubenswrapper[4933]: E1201 10:14:12.670325 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:14:26 crc kubenswrapper[4933]: I1201 10:14:26.669460 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907" Dec 01 10:14:26 crc kubenswrapper[4933]: E1201 10:14:26.670785 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:14:39 crc kubenswrapper[4933]: I1201 10:14:39.676009 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907" Dec 01 10:14:39 crc kubenswrapper[4933]: E1201 10:14:39.676925 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:14:53 crc kubenswrapper[4933]: I1201 10:14:53.668505 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907" Dec 01 10:14:53 crc kubenswrapper[4933]: E1201 10:14:53.669756 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:15:00 crc kubenswrapper[4933]: I1201 10:15:00.180748 4933 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29409735-chmx7"] Dec 01 10:15:00 crc kubenswrapper[4933]: I1201 10:15:00.184024 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409735-chmx7" Dec 01 10:15:00 crc kubenswrapper[4933]: I1201 10:15:00.190335 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 10:15:00 crc kubenswrapper[4933]: I1201 10:15:00.190466 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 10:15:00 crc kubenswrapper[4933]: I1201 10:15:00.200780 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409735-chmx7"] Dec 01 10:15:00 crc kubenswrapper[4933]: I1201 10:15:00.372034 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82256a60-79d8-4b88-8a87-51f49e0b61c3-config-volume\") pod \"collect-profiles-29409735-chmx7\" (UID: \"82256a60-79d8-4b88-8a87-51f49e0b61c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409735-chmx7" Dec 01 10:15:00 crc kubenswrapper[4933]: I1201 10:15:00.372145 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbngh\" (UniqueName: \"kubernetes.io/projected/82256a60-79d8-4b88-8a87-51f49e0b61c3-kube-api-access-vbngh\") pod \"collect-profiles-29409735-chmx7\" (UID: \"82256a60-79d8-4b88-8a87-51f49e0b61c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409735-chmx7" Dec 01 10:15:00 crc kubenswrapper[4933]: I1201 10:15:00.372231 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82256a60-79d8-4b88-8a87-51f49e0b61c3-secret-volume\") pod \"collect-profiles-29409735-chmx7\" (UID: \"82256a60-79d8-4b88-8a87-51f49e0b61c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409735-chmx7" Dec 01 10:15:00 crc kubenswrapper[4933]: I1201 10:15:00.474809 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82256a60-79d8-4b88-8a87-51f49e0b61c3-config-volume\") pod \"collect-profiles-29409735-chmx7\" (UID: \"82256a60-79d8-4b88-8a87-51f49e0b61c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409735-chmx7" Dec 01 10:15:00 crc kubenswrapper[4933]: I1201 10:15:00.475286 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbngh\" (UniqueName: \"kubernetes.io/projected/82256a60-79d8-4b88-8a87-51f49e0b61c3-kube-api-access-vbngh\") pod \"collect-profiles-29409735-chmx7\" (UID: \"82256a60-79d8-4b88-8a87-51f49e0b61c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409735-chmx7" Dec 01 10:15:00 crc kubenswrapper[4933]: I1201 10:15:00.475510 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82256a60-79d8-4b88-8a87-51f49e0b61c3-secret-volume\") pod \"collect-profiles-29409735-chmx7\" (UID: \"82256a60-79d8-4b88-8a87-51f49e0b61c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409735-chmx7" Dec 01 10:15:00 crc kubenswrapper[4933]: I1201 10:15:00.476296 
4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82256a60-79d8-4b88-8a87-51f49e0b61c3-config-volume\") pod \"collect-profiles-29409735-chmx7\" (UID: \"82256a60-79d8-4b88-8a87-51f49e0b61c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409735-chmx7" Dec 01 10:15:00 crc kubenswrapper[4933]: I1201 10:15:00.493670 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82256a60-79d8-4b88-8a87-51f49e0b61c3-secret-volume\") pod \"collect-profiles-29409735-chmx7\" (UID: \"82256a60-79d8-4b88-8a87-51f49e0b61c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409735-chmx7" Dec 01 10:15:00 crc kubenswrapper[4933]: I1201 10:15:00.499435 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbngh\" (UniqueName: \"kubernetes.io/projected/82256a60-79d8-4b88-8a87-51f49e0b61c3-kube-api-access-vbngh\") pod \"collect-profiles-29409735-chmx7\" (UID: \"82256a60-79d8-4b88-8a87-51f49e0b61c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409735-chmx7" Dec 01 10:15:00 crc kubenswrapper[4933]: I1201 10:15:00.524477 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409735-chmx7" Dec 01 10:15:01 crc kubenswrapper[4933]: I1201 10:15:01.031167 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409735-chmx7"] Dec 01 10:15:01 crc kubenswrapper[4933]: I1201 10:15:01.760563 4933 generic.go:334] "Generic (PLEG): container finished" podID="82256a60-79d8-4b88-8a87-51f49e0b61c3" containerID="abf270e91077da7c6444ab5149a229dcab95af8bcc2469c17e34fcc26e94fe42" exitCode=0 Dec 01 10:15:01 crc kubenswrapper[4933]: I1201 10:15:01.761030 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409735-chmx7" event={"ID":"82256a60-79d8-4b88-8a87-51f49e0b61c3","Type":"ContainerDied","Data":"abf270e91077da7c6444ab5149a229dcab95af8bcc2469c17e34fcc26e94fe42"} Dec 01 10:15:01 crc kubenswrapper[4933]: I1201 10:15:01.761070 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409735-chmx7" event={"ID":"82256a60-79d8-4b88-8a87-51f49e0b61c3","Type":"ContainerStarted","Data":"74fd4d93c4a89cb7414f9ee0ed0cf29f2b2247e13af2375e2e50eda98f0d6a74"} Dec 01 10:15:03 crc kubenswrapper[4933]: I1201 10:15:03.157215 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409735-chmx7" Dec 01 10:15:03 crc kubenswrapper[4933]: I1201 10:15:03.345780 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbngh\" (UniqueName: \"kubernetes.io/projected/82256a60-79d8-4b88-8a87-51f49e0b61c3-kube-api-access-vbngh\") pod \"82256a60-79d8-4b88-8a87-51f49e0b61c3\" (UID: \"82256a60-79d8-4b88-8a87-51f49e0b61c3\") " Dec 01 10:15:03 crc kubenswrapper[4933]: I1201 10:15:03.346129 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82256a60-79d8-4b88-8a87-51f49e0b61c3-secret-volume\") pod \"82256a60-79d8-4b88-8a87-51f49e0b61c3\" (UID: \"82256a60-79d8-4b88-8a87-51f49e0b61c3\") " Dec 01 10:15:03 crc kubenswrapper[4933]: I1201 10:15:03.346249 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82256a60-79d8-4b88-8a87-51f49e0b61c3-config-volume\") pod \"82256a60-79d8-4b88-8a87-51f49e0b61c3\" (UID: \"82256a60-79d8-4b88-8a87-51f49e0b61c3\") " Dec 01 10:15:03 crc kubenswrapper[4933]: I1201 10:15:03.347682 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82256a60-79d8-4b88-8a87-51f49e0b61c3-config-volume" (OuterVolumeSpecName: "config-volume") pod "82256a60-79d8-4b88-8a87-51f49e0b61c3" (UID: "82256a60-79d8-4b88-8a87-51f49e0b61c3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 10:15:03 crc kubenswrapper[4933]: I1201 10:15:03.355152 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82256a60-79d8-4b88-8a87-51f49e0b61c3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "82256a60-79d8-4b88-8a87-51f49e0b61c3" (UID: "82256a60-79d8-4b88-8a87-51f49e0b61c3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:15:03 crc kubenswrapper[4933]: I1201 10:15:03.356791 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82256a60-79d8-4b88-8a87-51f49e0b61c3-kube-api-access-vbngh" (OuterVolumeSpecName: "kube-api-access-vbngh") pod "82256a60-79d8-4b88-8a87-51f49e0b61c3" (UID: "82256a60-79d8-4b88-8a87-51f49e0b61c3"). InnerVolumeSpecName "kube-api-access-vbngh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:15:03 crc kubenswrapper[4933]: I1201 10:15:03.449401 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbngh\" (UniqueName: \"kubernetes.io/projected/82256a60-79d8-4b88-8a87-51f49e0b61c3-kube-api-access-vbngh\") on node \"crc\" DevicePath \"\"" Dec 01 10:15:03 crc kubenswrapper[4933]: I1201 10:15:03.449465 4933 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82256a60-79d8-4b88-8a87-51f49e0b61c3-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 10:15:03 crc kubenswrapper[4933]: I1201 10:15:03.449479 4933 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82256a60-79d8-4b88-8a87-51f49e0b61c3-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 10:15:03 crc kubenswrapper[4933]: I1201 10:15:03.788797 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409735-chmx7" event={"ID":"82256a60-79d8-4b88-8a87-51f49e0b61c3","Type":"ContainerDied","Data":"74fd4d93c4a89cb7414f9ee0ed0cf29f2b2247e13af2375e2e50eda98f0d6a74"} Dec 01 10:15:03 crc kubenswrapper[4933]: I1201 10:15:03.788867 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74fd4d93c4a89cb7414f9ee0ed0cf29f2b2247e13af2375e2e50eda98f0d6a74" Dec 01 10:15:03 crc kubenswrapper[4933]: I1201 10:15:03.789447 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409735-chmx7" Dec 01 10:15:04 crc kubenswrapper[4933]: I1201 10:15:04.257984 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409690-jvl8l"] Dec 01 10:15:04 crc kubenswrapper[4933]: I1201 10:15:04.267867 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409690-jvl8l"] Dec 01 10:15:05 crc kubenswrapper[4933]: I1201 10:15:05.680044 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3422f33-b5ab-4658-86a0-c908efca7db9" path="/var/lib/kubelet/pods/a3422f33-b5ab-4658-86a0-c908efca7db9/volumes" Dec 01 10:15:07 crc kubenswrapper[4933]: I1201 10:15:07.668725 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907" Dec 01 10:15:07 crc kubenswrapper[4933]: E1201 10:15:07.669618 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:15:19 crc kubenswrapper[4933]: I1201 10:15:19.676192 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907" Dec 01 10:15:19 crc kubenswrapper[4933]: E1201 10:15:19.678211 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:15:32 crc kubenswrapper[4933]: I1201 10:15:32.668264 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907" Dec 01 10:15:32 crc kubenswrapper[4933]: E1201 10:15:32.669552 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:15:43 crc kubenswrapper[4933]: I1201 10:15:43.667979 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907" Dec 01 10:15:43 crc kubenswrapper[4933]: E1201 10:15:43.669046 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:15:49 crc kubenswrapper[4933]: I1201 10:15:49.264467 4933 generic.go:334] "Generic (PLEG): container finished" podID="e9ac33c2-a83f-4ec8-8458-b366a2aebd5d" containerID="64585eec319b65cd1f4fd9005bfb6ff9ad2d8bbafad8c668260c854f2d6155e9" exitCode=0 Dec 01 10:15:49 crc kubenswrapper[4933]: I1201 10:15:49.264568 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" event={"ID":"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d","Type":"ContainerDied","Data":"64585eec319b65cd1f4fd9005bfb6ff9ad2d8bbafad8c668260c854f2d6155e9"} Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.743611 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.864295 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-ssh-key\") pod \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.864385 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-cell1-compute-config-0\") pod \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.864547 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-migration-ssh-key-1\") pod \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.864598 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-migration-ssh-key-0\") pod \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.864650 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-combined-ca-bundle\") pod \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.864688 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99czs\" (UniqueName: \"kubernetes.io/projected/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-kube-api-access-99czs\") pod \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.864713 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-inventory\") pod \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.864776 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-extra-config-0\") pod \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.864843 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-cell1-compute-config-1\") pod \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\" (UID: \"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d\") " Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.872799 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "e9ac33c2-a83f-4ec8-8458-b366a2aebd5d" (UID: "e9ac33c2-a83f-4ec8-8458-b366a2aebd5d"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.876180 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-kube-api-access-99czs" (OuterVolumeSpecName: "kube-api-access-99czs") pod "e9ac33c2-a83f-4ec8-8458-b366a2aebd5d" (UID: "e9ac33c2-a83f-4ec8-8458-b366a2aebd5d"). InnerVolumeSpecName "kube-api-access-99czs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.897844 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "e9ac33c2-a83f-4ec8-8458-b366a2aebd5d" (UID: "e9ac33c2-a83f-4ec8-8458-b366a2aebd5d"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.899626 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e9ac33c2-a83f-4ec8-8458-b366a2aebd5d" (UID: "e9ac33c2-a83f-4ec8-8458-b366a2aebd5d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.904756 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-inventory" (OuterVolumeSpecName: "inventory") pod "e9ac33c2-a83f-4ec8-8458-b366a2aebd5d" (UID: "e9ac33c2-a83f-4ec8-8458-b366a2aebd5d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.905705 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "e9ac33c2-a83f-4ec8-8458-b366a2aebd5d" (UID: "e9ac33c2-a83f-4ec8-8458-b366a2aebd5d"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.907196 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "e9ac33c2-a83f-4ec8-8458-b366a2aebd5d" (UID: "e9ac33c2-a83f-4ec8-8458-b366a2aebd5d"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.910900 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "e9ac33c2-a83f-4ec8-8458-b366a2aebd5d" (UID: "e9ac33c2-a83f-4ec8-8458-b366a2aebd5d"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.911345 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "e9ac33c2-a83f-4ec8-8458-b366a2aebd5d" (UID: "e9ac33c2-a83f-4ec8-8458-b366a2aebd5d"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.967937 4933 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.967982 4933 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.967991 4933 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.968002 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99czs\" (UniqueName: \"kubernetes.io/projected/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-kube-api-access-99czs\") on node \"crc\" DevicePath \"\"" Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.968023 4933 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.968038 4933 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.968050 4933 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.968061 4933 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 10:15:50 crc kubenswrapper[4933]: I1201 10:15:50.968073 4933 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e9ac33c2-a83f-4ec8-8458-b366a2aebd5d-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.282438 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" event={"ID":"e9ac33c2-a83f-4ec8-8458-b366a2aebd5d","Type":"ContainerDied","Data":"13b3e429cef4ff1dc4c78aa9f00fb93e9cfe0d919dcc869cfff1650a3f4ad976"} Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.282499 4933 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="13b3e429cef4ff1dc4c78aa9f00fb93e9cfe0d919dcc869cfff1650a3f4ad976" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.282688 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-69txc" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.400358 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd"] Dec 01 10:15:51 crc kubenswrapper[4933]: E1201 10:15:51.401539 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9ac33c2-a83f-4ec8-8458-b366a2aebd5d" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.401570 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9ac33c2-a83f-4ec8-8458-b366a2aebd5d" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 01 10:15:51 crc kubenswrapper[4933]: E1201 10:15:51.401644 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82256a60-79d8-4b88-8a87-51f49e0b61c3" containerName="collect-profiles" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.401655 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="82256a60-79d8-4b88-8a87-51f49e0b61c3" containerName="collect-profiles" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.401942 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9ac33c2-a83f-4ec8-8458-b366a2aebd5d" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.401978 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="82256a60-79d8-4b88-8a87-51f49e0b61c3" containerName="collect-profiles" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.403234 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.407598 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.407879 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8vmpq" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.408040 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.408194 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.408480 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.413542 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd"] Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.580391 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxssd\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.580496 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxssd\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.580567 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxssd\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.580608 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxssd\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.580831 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghhhp\" (UniqueName: \"kubernetes.io/projected/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-kube-api-access-ghhhp\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxssd\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" Dec 01 10:15:51 
crc kubenswrapper[4933]: I1201 10:15:51.581023 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxssd\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.581282 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxssd\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.684136 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxssd\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.684391 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghhhp\" (UniqueName: \"kubernetes.io/projected/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-kube-api-access-ghhhp\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxssd\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.684441 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxssd\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.684504 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxssd\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.684571 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxssd\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.684600 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxssd\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.684638 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxssd\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.690016 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxssd\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.690268 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxssd\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.690481 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxssd\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.691446 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxssd\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.693039 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxssd\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.698666 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxssd\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.707740 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghhhp\" (UniqueName: \"kubernetes.io/projected/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-kube-api-access-ghhhp\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxssd\" (UID: 
\"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" Dec 01 10:15:51 crc kubenswrapper[4933]: I1201 10:15:51.738499 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" Dec 01 10:15:52 crc kubenswrapper[4933]: I1201 10:15:52.312814 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd"] Dec 01 10:15:53 crc kubenswrapper[4933]: I1201 10:15:53.302022 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" event={"ID":"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823","Type":"ContainerStarted","Data":"d732284dc35e759451635999a31d2818a4e0370c05f739bbead2afe34cea55c9"} Dec 01 10:15:54 crc kubenswrapper[4933]: I1201 10:15:54.312063 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" event={"ID":"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823","Type":"ContainerStarted","Data":"7da8e8594b3fe80461ebdc1d1df678abad9a61f6a58c7ce834a0d1c7c74bef15"} Dec 01 10:15:54 crc kubenswrapper[4933]: I1201 10:15:54.339925 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" podStartSLOduration=2.540676263 podStartE2EDuration="3.33990223s" podCreationTimestamp="2025-12-01 10:15:51 +0000 UTC" firstStartedPulling="2025-12-01 10:15:52.315093992 +0000 UTC m=+2642.956817607" lastFinishedPulling="2025-12-01 10:15:53.114319959 +0000 UTC m=+2643.756043574" observedRunningTime="2025-12-01 10:15:54.33460419 +0000 UTC m=+2644.976327805" watchObservedRunningTime="2025-12-01 10:15:54.33990223 +0000 UTC m=+2644.981625845" Dec 01 10:15:57 crc kubenswrapper[4933]: I1201 10:15:57.668856 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907" Dec 01 10:15:57 crc kubenswrapper[4933]: E1201 10:15:57.669604 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:15:59 crc kubenswrapper[4933]: I1201 10:15:59.074023 4933 scope.go:117] "RemoveContainer" containerID="a48d1ad0a3c4f2dc67a0ef25a006868e41fb607592f16bc6a8234a203356793e" Dec 01 10:16:09 crc kubenswrapper[4933]: I1201 10:16:09.673812 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907" Dec 01 10:16:09 crc kubenswrapper[4933]: E1201 10:16:09.674707 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:16:23 crc kubenswrapper[4933]: I1201 10:16:23.668010 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907" Dec 01 10:16:23 crc 
Dec 01 10:15:57 crc kubenswrapper[4933]: I1201 10:15:57.668856 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907"
Dec 01 10:15:57 crc kubenswrapper[4933]: E1201 10:15:57.669604 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23"
Dec 01 10:15:59 crc kubenswrapper[4933]: I1201 10:15:59.074023 4933 scope.go:117] "RemoveContainer" containerID="a48d1ad0a3c4f2dc67a0ef25a006868e41fb607592f16bc6a8234a203356793e"
Dec 01 10:16:09 crc kubenswrapper[4933]: I1201 10:16:09.673812 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907"
Dec 01 10:16:09 crc kubenswrapper[4933]: E1201 10:16:09.674707 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23"
Dec 01 10:16:23 crc kubenswrapper[4933]: I1201 10:16:23.668010 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907"
Dec 01 10:16:23 crc kubenswrapper[4933]: E1201 10:16:23.668954 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23"
Dec 01 10:16:37 crc kubenswrapper[4933]: I1201 10:16:37.668929 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907"
Dec 01 10:16:37 crc kubenswrapper[4933]: E1201 10:16:37.670088 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23"
Dec 01 10:16:49 crc kubenswrapper[4933]: I1201 10:16:49.681631 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907"
Dec 01 10:16:50 crc kubenswrapper[4933]: I1201 10:16:50.854667 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerStarted","Data":"3d2bf94d76a94ee10c5642ca9b82e2c6ccf4d73dabae1332a061736cfbee4c70"}
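Here machine-config-daemon is crash looping: every sync from 10:15:57 to 10:16:37 is skipped with "back-off 5m0s", and only the 10:16:49 attempt actually restarts the container (ContainerStarted at 10:16:50). The kubelet doubles the restart delay per crash up to a cap; a minimal sketch, assuming the commonly documented 10 s base and 5 m cap rather than values read from kubelet source:

```go
package main

import (
	"fmt"
	"time"
)

// backoff returns the capped doubling delay applied to a crash-looping
// container: base delay, doubled per observed restart, clamped at the cap.
func backoff(restarts int) time.Duration {
	const base, cap = 10 * time.Second, 5 * time.Minute
	d := base
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= cap {
			return cap
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("restart %d -> wait %v\n", r, backoff(r))
	}
	// after a handful of restarts the delay pins at the 5m0s cap that
	// appears verbatim in the "back-off 5m0s" errors above
}
```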
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" Dec 01 10:18:08 crc kubenswrapper[4933]: I1201 10:18:08.137465 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-inventory\") pod \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " Dec 01 10:18:08 crc kubenswrapper[4933]: I1201 10:18:08.137564 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-ceilometer-compute-config-data-0\") pod \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " Dec 01 10:18:08 crc kubenswrapper[4933]: I1201 10:18:08.137613 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-ceilometer-compute-config-data-2\") pod \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " Dec 01 10:18:08 crc kubenswrapper[4933]: I1201 10:18:08.137672 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-ceilometer-compute-config-data-1\") pod \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " Dec 01 10:18:08 crc kubenswrapper[4933]: I1201 10:18:08.137904 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghhhp\" (UniqueName: \"kubernetes.io/projected/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-kube-api-access-ghhhp\") pod \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " Dec 01 10:18:08 crc kubenswrapper[4933]: I1201 10:18:08.137968 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-telemetry-combined-ca-bundle\") pod \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " Dec 01 10:18:08 crc kubenswrapper[4933]: I1201 10:18:08.138147 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-ssh-key\") pod \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\" (UID: \"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823\") " Dec 01 10:18:08 crc kubenswrapper[4933]: I1201 10:18:08.145143 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-kube-api-access-ghhhp" (OuterVolumeSpecName: "kube-api-access-ghhhp") pod "eab7fc1e-f6ce-41a3-9a65-1773b1c2e823" (UID: "eab7fc1e-f6ce-41a3-9a65-1773b1c2e823"). InnerVolumeSpecName "kube-api-access-ghhhp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:18:08 crc kubenswrapper[4933]: I1201 10:18:08.147289 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "eab7fc1e-f6ce-41a3-9a65-1773b1c2e823" (UID: "eab7fc1e-f6ce-41a3-9a65-1773b1c2e823"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:18:08 crc kubenswrapper[4933]: I1201 10:18:08.175184 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "eab7fc1e-f6ce-41a3-9a65-1773b1c2e823" (UID: "eab7fc1e-f6ce-41a3-9a65-1773b1c2e823"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:18:08 crc kubenswrapper[4933]: I1201 10:18:08.176731 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "eab7fc1e-f6ce-41a3-9a65-1773b1c2e823" (UID: "eab7fc1e-f6ce-41a3-9a65-1773b1c2e823"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:18:08 crc kubenswrapper[4933]: I1201 10:18:08.184416 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-inventory" (OuterVolumeSpecName: "inventory") pod "eab7fc1e-f6ce-41a3-9a65-1773b1c2e823" (UID: "eab7fc1e-f6ce-41a3-9a65-1773b1c2e823"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:18:08 crc kubenswrapper[4933]: I1201 10:18:08.186880 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "eab7fc1e-f6ce-41a3-9a65-1773b1c2e823" (UID: "eab7fc1e-f6ce-41a3-9a65-1773b1c2e823"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:18:08 crc kubenswrapper[4933]: I1201 10:18:08.189820 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "eab7fc1e-f6ce-41a3-9a65-1773b1c2e823" (UID: "eab7fc1e-f6ce-41a3-9a65-1773b1c2e823"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:18:08 crc kubenswrapper[4933]: I1201 10:18:08.242136 4933 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 10:18:08 crc kubenswrapper[4933]: I1201 10:18:08.242174 4933 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 01 10:18:08 crc kubenswrapper[4933]: I1201 10:18:08.242187 4933 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 01 10:18:08 crc kubenswrapper[4933]: I1201 10:18:08.242197 4933 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 01 10:18:08 crc kubenswrapper[4933]: I1201 10:18:08.242206 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghhhp\" (UniqueName: \"kubernetes.io/projected/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-kube-api-access-ghhhp\") on node \"crc\" DevicePath \"\"" Dec 01 10:18:08 crc kubenswrapper[4933]: I1201 10:18:08.242219 4933 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 10:18:08 crc kubenswrapper[4933]: I1201 10:18:08.242227 4933 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eab7fc1e-f6ce-41a3-9a65-1773b1c2e823-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 10:18:08 crc kubenswrapper[4933]: I1201 10:18:08.649809 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" event={"ID":"eab7fc1e-f6ce-41a3-9a65-1773b1c2e823","Type":"ContainerDied","Data":"d732284dc35e759451635999a31d2818a4e0370c05f739bbead2afe34cea55c9"} Dec 01 10:18:08 crc kubenswrapper[4933]: I1201 10:18:08.650266 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d732284dc35e759451635999a31d2818a4e0370c05f739bbead2afe34cea55c9" Dec 01 10:18:08 crc kubenswrapper[4933]: I1201 10:18:08.649959 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxssd" Dec 01 10:18:46 crc kubenswrapper[4933]: I1201 10:18:46.414222 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5rf97"] Dec 01 10:18:46 crc kubenswrapper[4933]: E1201 10:18:46.415413 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eab7fc1e-f6ce-41a3-9a65-1773b1c2e823" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 01 10:18:46 crc kubenswrapper[4933]: I1201 10:18:46.415429 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="eab7fc1e-f6ce-41a3-9a65-1773b1c2e823" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 01 10:18:46 crc kubenswrapper[4933]: I1201 10:18:46.415677 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="eab7fc1e-f6ce-41a3-9a65-1773b1c2e823" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 01 10:18:46 crc kubenswrapper[4933]: I1201 10:18:46.420128 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rf97" Dec 01 10:18:46 crc kubenswrapper[4933]: I1201 10:18:46.426755 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rf97"] Dec 01 10:18:46 crc kubenswrapper[4933]: I1201 10:18:46.489345 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88snd\" (UniqueName: \"kubernetes.io/projected/9f165f22-cbb0-4573-94cc-5802d53b2453-kube-api-access-88snd\") pod \"redhat-marketplace-5rf97\" (UID: \"9f165f22-cbb0-4573-94cc-5802d53b2453\") " pod="openshift-marketplace/redhat-marketplace-5rf97" Dec 01 10:18:46 crc kubenswrapper[4933]: I1201 10:18:46.489429 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f165f22-cbb0-4573-94cc-5802d53b2453-utilities\") pod \"redhat-marketplace-5rf97\" (UID: \"9f165f22-cbb0-4573-94cc-5802d53b2453\") " pod="openshift-marketplace/redhat-marketplace-5rf97" Dec 01 10:18:46 crc kubenswrapper[4933]: I1201 10:18:46.489698 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f165f22-cbb0-4573-94cc-5802d53b2453-catalog-content\") pod \"redhat-marketplace-5rf97\" (UID: \"9f165f22-cbb0-4573-94cc-5802d53b2453\") " pod="openshift-marketplace/redhat-marketplace-5rf97" Dec 01 10:18:46 crc kubenswrapper[4933]: I1201 10:18:46.591613 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f165f22-cbb0-4573-94cc-5802d53b2453-utilities\") pod \"redhat-marketplace-5rf97\" (UID: \"9f165f22-cbb0-4573-94cc-5802d53b2453\") " pod="openshift-marketplace/redhat-marketplace-5rf97" Dec 01 10:18:46 crc kubenswrapper[4933]: I1201 10:18:46.591708 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f165f22-cbb0-4573-94cc-5802d53b2453-catalog-content\") pod \"redhat-marketplace-5rf97\" (UID: \"9f165f22-cbb0-4573-94cc-5802d53b2453\") " pod="openshift-marketplace/redhat-marketplace-5rf97" Dec 01 10:18:46 crc kubenswrapper[4933]: I1201 10:18:46.591805 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88snd\" (UniqueName: 
\"kubernetes.io/projected/9f165f22-cbb0-4573-94cc-5802d53b2453-kube-api-access-88snd\") pod \"redhat-marketplace-5rf97\" (UID: \"9f165f22-cbb0-4573-94cc-5802d53b2453\") " pod="openshift-marketplace/redhat-marketplace-5rf97" Dec 01 10:18:46 crc kubenswrapper[4933]: I1201 10:18:46.592405 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f165f22-cbb0-4573-94cc-5802d53b2453-utilities\") pod \"redhat-marketplace-5rf97\" (UID: \"9f165f22-cbb0-4573-94cc-5802d53b2453\") " pod="openshift-marketplace/redhat-marketplace-5rf97" Dec 01 10:18:46 crc kubenswrapper[4933]: I1201 10:18:46.592493 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f165f22-cbb0-4573-94cc-5802d53b2453-catalog-content\") pod \"redhat-marketplace-5rf97\" (UID: \"9f165f22-cbb0-4573-94cc-5802d53b2453\") " pod="openshift-marketplace/redhat-marketplace-5rf97" Dec 01 10:18:46 crc kubenswrapper[4933]: I1201 10:18:46.621522 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88snd\" (UniqueName: \"kubernetes.io/projected/9f165f22-cbb0-4573-94cc-5802d53b2453-kube-api-access-88snd\") pod \"redhat-marketplace-5rf97\" (UID: \"9f165f22-cbb0-4573-94cc-5802d53b2453\") " pod="openshift-marketplace/redhat-marketplace-5rf97" Dec 01 10:18:46 crc kubenswrapper[4933]: I1201 10:18:46.747370 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rf97" Dec 01 10:18:47 crc kubenswrapper[4933]: I1201 10:18:47.235063 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rf97"] Dec 01 10:18:48 crc kubenswrapper[4933]: I1201 10:18:48.048500 4933 generic.go:334] "Generic (PLEG): container finished" podID="9f165f22-cbb0-4573-94cc-5802d53b2453" containerID="9274871d81a21b3b8ed7622d2e599af88aae8204b98c67b200c5d3da74656eb2" exitCode=0 Dec 01 10:18:48 crc kubenswrapper[4933]: I1201 10:18:48.048551 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rf97" event={"ID":"9f165f22-cbb0-4573-94cc-5802d53b2453","Type":"ContainerDied","Data":"9274871d81a21b3b8ed7622d2e599af88aae8204b98c67b200c5d3da74656eb2"} Dec 01 10:18:48 crc kubenswrapper[4933]: I1201 10:18:48.048588 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rf97" event={"ID":"9f165f22-cbb0-4573-94cc-5802d53b2453","Type":"ContainerStarted","Data":"c6dec6feec43037e08339ff7a14094645cd1b0bb39d3d3daf058812f07751dd5"} Dec 01 10:18:48 crc kubenswrapper[4933]: I1201 10:18:48.052079 4933 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 10:18:49 crc kubenswrapper[4933]: I1201 10:18:49.060532 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rf97" event={"ID":"9f165f22-cbb0-4573-94cc-5802d53b2453","Type":"ContainerStarted","Data":"65760c41d85a070ef0ca3983d32687944eb75c8fef8dcba834fc705412c6226f"} Dec 01 10:18:50 crc kubenswrapper[4933]: I1201 10:18:50.099855 4933 generic.go:334] "Generic (PLEG): container finished" podID="9f165f22-cbb0-4573-94cc-5802d53b2453" containerID="65760c41d85a070ef0ca3983d32687944eb75c8fef8dcba834fc705412c6226f" exitCode=0 Dec 01 10:18:50 crc kubenswrapper[4933]: I1201 10:18:50.099964 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-5rf97" event={"ID":"9f165f22-cbb0-4573-94cc-5802d53b2453","Type":"ContainerDied","Data":"65760c41d85a070ef0ca3983d32687944eb75c8fef8dcba834fc705412c6226f"} Dec 01 10:18:51 crc kubenswrapper[4933]: I1201 10:18:51.111296 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rf97" event={"ID":"9f165f22-cbb0-4573-94cc-5802d53b2453","Type":"ContainerStarted","Data":"4c11ad98e7ca4d3a8c8e2daf3a6ecec07dea24e3f8b6900437a482147bb10ac7"} Dec 01 10:18:51 crc kubenswrapper[4933]: I1201 10:18:51.137039 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5rf97" podStartSLOduration=2.5734561019999997 podStartE2EDuration="5.137016907s" podCreationTimestamp="2025-12-01 10:18:46 +0000 UTC" firstStartedPulling="2025-12-01 10:18:48.051797403 +0000 UTC m=+2818.693521018" lastFinishedPulling="2025-12-01 10:18:50.615358198 +0000 UTC m=+2821.257081823" observedRunningTime="2025-12-01 10:18:51.129956262 +0000 UTC m=+2821.771679887" watchObservedRunningTime="2025-12-01 10:18:51.137016907 +0000 UTC m=+2821.778740522" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.065400 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.067469 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.072302 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-mlp48" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.072776 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.073007 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.074276 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.076274 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.113771 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c272594d-4d61-490a-a44d-0a82106c9a1f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.113921 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jfs7\" (UniqueName: \"kubernetes.io/projected/c272594d-4d61-490a-a44d-0a82106c9a1f-kube-api-access-5jfs7\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.113991 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " 
pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.114094 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c272594d-4d61-490a-a44d-0a82106c9a1f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.114166 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c272594d-4d61-490a-a44d-0a82106c9a1f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.114506 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c272594d-4d61-490a-a44d-0a82106c9a1f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.114542 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c272594d-4d61-490a-a44d-0a82106c9a1f-config-data\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.114580 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c272594d-4d61-490a-a44d-0a82106c9a1f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.114639 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c272594d-4d61-490a-a44d-0a82106c9a1f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.216015 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c272594d-4d61-490a-a44d-0a82106c9a1f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.216077 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c272594d-4d61-490a-a44d-0a82106c9a1f-config-data\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.216124 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c272594d-4d61-490a-a44d-0a82106c9a1f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " 
pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.216181 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c272594d-4d61-490a-a44d-0a82106c9a1f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.216283 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c272594d-4d61-490a-a44d-0a82106c9a1f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.216352 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jfs7\" (UniqueName: \"kubernetes.io/projected/c272594d-4d61-490a-a44d-0a82106c9a1f-kube-api-access-5jfs7\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.216387 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.216463 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c272594d-4d61-490a-a44d-0a82106c9a1f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.216506 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c272594d-4d61-490a-a44d-0a82106c9a1f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.216898 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c272594d-4d61-490a-a44d-0a82106c9a1f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.217395 4933 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.217666 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c272594d-4d61-490a-a44d-0a82106c9a1f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.225976 4933 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c272594d-4d61-490a-a44d-0a82106c9a1f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.228069 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c272594d-4d61-490a-a44d-0a82106c9a1f-config-data\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.229586 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c272594d-4d61-490a-a44d-0a82106c9a1f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.232664 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c272594d-4d61-490a-a44d-0a82106c9a1f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.233707 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c272594d-4d61-490a-a44d-0a82106c9a1f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.238655 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jfs7\" (UniqueName: \"kubernetes.io/projected/c272594d-4d61-490a-a44d-0a82106c9a1f-kube-api-access-5jfs7\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.252431 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.389641 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 01 10:18:52 crc kubenswrapper[4933]: I1201 10:18:52.896985 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 01 10:18:53 crc kubenswrapper[4933]: I1201 10:18:53.133438 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"c272594d-4d61-490a-a44d-0a82106c9a1f","Type":"ContainerStarted","Data":"559424c60950274e1a7352968eeb26167c406bde0e5f6636ec64c0efe21649ae"} Dec 01 10:18:56 crc kubenswrapper[4933]: I1201 10:18:56.748340 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5rf97" Dec 01 10:18:56 crc kubenswrapper[4933]: I1201 10:18:56.749010 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5rf97" Dec 01 10:18:56 crc kubenswrapper[4933]: I1201 10:18:56.810553 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5rf97" Dec 01 10:18:57 crc kubenswrapper[4933]: I1201 10:18:57.226592 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5rf97" Dec 01 10:18:57 crc kubenswrapper[4933]: I1201 10:18:57.285378 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rf97"] Dec 01 10:18:59 crc kubenswrapper[4933]: I1201 10:18:59.196529 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5rf97" podUID="9f165f22-cbb0-4573-94cc-5802d53b2453" containerName="registry-server" containerID="cri-o://4c11ad98e7ca4d3a8c8e2daf3a6ecec07dea24e3f8b6900437a482147bb10ac7" gracePeriod=2 Dec 01 10:19:01 crc kubenswrapper[4933]: I1201 10:19:01.141511 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rf97" Dec 01 10:19:01 crc kubenswrapper[4933]: I1201 10:19:01.219838 4933 generic.go:334] "Generic (PLEG): container finished" podID="9f165f22-cbb0-4573-94cc-5802d53b2453" containerID="4c11ad98e7ca4d3a8c8e2daf3a6ecec07dea24e3f8b6900437a482147bb10ac7" exitCode=0 Dec 01 10:19:01 crc kubenswrapper[4933]: I1201 10:19:01.219910 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rf97" event={"ID":"9f165f22-cbb0-4573-94cc-5802d53b2453","Type":"ContainerDied","Data":"4c11ad98e7ca4d3a8c8e2daf3a6ecec07dea24e3f8b6900437a482147bb10ac7"} Dec 01 10:19:01 crc kubenswrapper[4933]: I1201 10:19:01.219949 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rf97" Dec 01 10:19:01 crc kubenswrapper[4933]: I1201 10:19:01.219982 4933 scope.go:117] "RemoveContainer" containerID="4c11ad98e7ca4d3a8c8e2daf3a6ecec07dea24e3f8b6900437a482147bb10ac7" Dec 01 10:19:01 crc kubenswrapper[4933]: I1201 10:19:01.219963 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rf97" event={"ID":"9f165f22-cbb0-4573-94cc-5802d53b2453","Type":"ContainerDied","Data":"c6dec6feec43037e08339ff7a14094645cd1b0bb39d3d3daf058812f07751dd5"} Dec 01 10:19:01 crc kubenswrapper[4933]: I1201 10:19:01.242913 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f165f22-cbb0-4573-94cc-5802d53b2453-catalog-content\") pod \"9f165f22-cbb0-4573-94cc-5802d53b2453\" (UID: \"9f165f22-cbb0-4573-94cc-5802d53b2453\") " Dec 01 10:19:01 crc kubenswrapper[4933]: I1201 10:19:01.243194 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88snd\" (UniqueName: \"kubernetes.io/projected/9f165f22-cbb0-4573-94cc-5802d53b2453-kube-api-access-88snd\") pod \"9f165f22-cbb0-4573-94cc-5802d53b2453\" (UID: \"9f165f22-cbb0-4573-94cc-5802d53b2453\") " Dec 01 10:19:01 crc kubenswrapper[4933]: I1201 10:19:01.243243 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f165f22-cbb0-4573-94cc-5802d53b2453-utilities\") pod \"9f165f22-cbb0-4573-94cc-5802d53b2453\" (UID: \"9f165f22-cbb0-4573-94cc-5802d53b2453\") " Dec 01 10:19:01 crc kubenswrapper[4933]: I1201 10:19:01.244298 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f165f22-cbb0-4573-94cc-5802d53b2453-utilities" (OuterVolumeSpecName: "utilities") pod "9f165f22-cbb0-4573-94cc-5802d53b2453" (UID: "9f165f22-cbb0-4573-94cc-5802d53b2453"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:19:01 crc kubenswrapper[4933]: I1201 10:19:01.252803 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f165f22-cbb0-4573-94cc-5802d53b2453-kube-api-access-88snd" (OuterVolumeSpecName: "kube-api-access-88snd") pod "9f165f22-cbb0-4573-94cc-5802d53b2453" (UID: "9f165f22-cbb0-4573-94cc-5802d53b2453"). InnerVolumeSpecName "kube-api-access-88snd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:19:01 crc kubenswrapper[4933]: I1201 10:19:01.267244 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f165f22-cbb0-4573-94cc-5802d53b2453-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f165f22-cbb0-4573-94cc-5802d53b2453" (UID: "9f165f22-cbb0-4573-94cc-5802d53b2453"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:19:01 crc kubenswrapper[4933]: I1201 10:19:01.307047 4933 scope.go:117] "RemoveContainer" containerID="65760c41d85a070ef0ca3983d32687944eb75c8fef8dcba834fc705412c6226f" Dec 01 10:19:01 crc kubenswrapper[4933]: I1201 10:19:01.346113 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88snd\" (UniqueName: \"kubernetes.io/projected/9f165f22-cbb0-4573-94cc-5802d53b2453-kube-api-access-88snd\") on node \"crc\" DevicePath \"\"" Dec 01 10:19:01 crc kubenswrapper[4933]: I1201 10:19:01.346183 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f165f22-cbb0-4573-94cc-5802d53b2453-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 10:19:01 crc kubenswrapper[4933]: I1201 10:19:01.346194 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f165f22-cbb0-4573-94cc-5802d53b2453-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 10:19:01 crc kubenswrapper[4933]: I1201 10:19:01.393130 4933 scope.go:117] "RemoveContainer" containerID="9274871d81a21b3b8ed7622d2e599af88aae8204b98c67b200c5d3da74656eb2" Dec 01 10:19:01 crc kubenswrapper[4933]: I1201 10:19:01.468329 4933 scope.go:117] "RemoveContainer" containerID="4c11ad98e7ca4d3a8c8e2daf3a6ecec07dea24e3f8b6900437a482147bb10ac7" Dec 01 10:19:01 crc kubenswrapper[4933]: E1201 10:19:01.469068 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c11ad98e7ca4d3a8c8e2daf3a6ecec07dea24e3f8b6900437a482147bb10ac7\": container with ID starting with 4c11ad98e7ca4d3a8c8e2daf3a6ecec07dea24e3f8b6900437a482147bb10ac7 not found: ID does not exist" containerID="4c11ad98e7ca4d3a8c8e2daf3a6ecec07dea24e3f8b6900437a482147bb10ac7" Dec 01 10:19:01 crc kubenswrapper[4933]: I1201 10:19:01.469120 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c11ad98e7ca4d3a8c8e2daf3a6ecec07dea24e3f8b6900437a482147bb10ac7"} err="failed to get container status \"4c11ad98e7ca4d3a8c8e2daf3a6ecec07dea24e3f8b6900437a482147bb10ac7\": rpc error: code = NotFound desc = could not find container \"4c11ad98e7ca4d3a8c8e2daf3a6ecec07dea24e3f8b6900437a482147bb10ac7\": container with ID starting with 4c11ad98e7ca4d3a8c8e2daf3a6ecec07dea24e3f8b6900437a482147bb10ac7 not found: ID does not exist" Dec 01 10:19:01 crc kubenswrapper[4933]: I1201 10:19:01.469159 4933 scope.go:117] "RemoveContainer" containerID="65760c41d85a070ef0ca3983d32687944eb75c8fef8dcba834fc705412c6226f" Dec 01 10:19:01 crc kubenswrapper[4933]: E1201 10:19:01.469734 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65760c41d85a070ef0ca3983d32687944eb75c8fef8dcba834fc705412c6226f\": container with ID starting with 65760c41d85a070ef0ca3983d32687944eb75c8fef8dcba834fc705412c6226f not found: ID does not exist" containerID="65760c41d85a070ef0ca3983d32687944eb75c8fef8dcba834fc705412c6226f" Dec 01 10:19:01 crc kubenswrapper[4933]: I1201 10:19:01.469904 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65760c41d85a070ef0ca3983d32687944eb75c8fef8dcba834fc705412c6226f"} err="failed to get container status \"65760c41d85a070ef0ca3983d32687944eb75c8fef8dcba834fc705412c6226f\": rpc error: code = NotFound desc = could not find container 
\"65760c41d85a070ef0ca3983d32687944eb75c8fef8dcba834fc705412c6226f\": container with ID starting with 65760c41d85a070ef0ca3983d32687944eb75c8fef8dcba834fc705412c6226f not found: ID does not exist" Dec 01 10:19:01 crc kubenswrapper[4933]: I1201 10:19:01.470019 4933 scope.go:117] "RemoveContainer" containerID="9274871d81a21b3b8ed7622d2e599af88aae8204b98c67b200c5d3da74656eb2" Dec 01 10:19:01 crc kubenswrapper[4933]: E1201 10:19:01.470790 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9274871d81a21b3b8ed7622d2e599af88aae8204b98c67b200c5d3da74656eb2\": container with ID starting with 9274871d81a21b3b8ed7622d2e599af88aae8204b98c67b200c5d3da74656eb2 not found: ID does not exist" containerID="9274871d81a21b3b8ed7622d2e599af88aae8204b98c67b200c5d3da74656eb2" Dec 01 10:19:01 crc kubenswrapper[4933]: I1201 10:19:01.470841 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9274871d81a21b3b8ed7622d2e599af88aae8204b98c67b200c5d3da74656eb2"} err="failed to get container status \"9274871d81a21b3b8ed7622d2e599af88aae8204b98c67b200c5d3da74656eb2\": rpc error: code = NotFound desc = could not find container \"9274871d81a21b3b8ed7622d2e599af88aae8204b98c67b200c5d3da74656eb2\": container with ID starting with 9274871d81a21b3b8ed7622d2e599af88aae8204b98c67b200c5d3da74656eb2 not found: ID does not exist" Dec 01 10:19:01 crc kubenswrapper[4933]: I1201 10:19:01.567134 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rf97"] Dec 01 10:19:01 crc kubenswrapper[4933]: I1201 10:19:01.579249 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rf97"] Dec 01 10:19:01 crc kubenswrapper[4933]: I1201 10:19:01.688175 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f165f22-cbb0-4573-94cc-5802d53b2453" path="/var/lib/kubelet/pods/9f165f22-cbb0-4573-94cc-5802d53b2453/volumes" Dec 01 10:19:11 crc kubenswrapper[4933]: I1201 10:19:11.741606 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 10:19:11 crc kubenswrapper[4933]: I1201 10:19:11.742320 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 10:19:21 crc kubenswrapper[4933]: I1201 10:19:21.271387 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bhs6k"] Dec 01 10:19:21 crc kubenswrapper[4933]: E1201 10:19:21.272599 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f165f22-cbb0-4573-94cc-5802d53b2453" containerName="registry-server" Dec 01 10:19:21 crc kubenswrapper[4933]: I1201 10:19:21.272612 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f165f22-cbb0-4573-94cc-5802d53b2453" containerName="registry-server" Dec 01 10:19:21 crc kubenswrapper[4933]: E1201 10:19:21.272643 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f165f22-cbb0-4573-94cc-5802d53b2453" containerName="extract-utilities" Dec 
Dec 01 10:19:21 crc kubenswrapper[4933]: I1201 10:19:21.271387 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bhs6k"]
Dec 01 10:19:21 crc kubenswrapper[4933]: E1201 10:19:21.272599 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f165f22-cbb0-4573-94cc-5802d53b2453" containerName="registry-server"
Dec 01 10:19:21 crc kubenswrapper[4933]: I1201 10:19:21.272612 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f165f22-cbb0-4573-94cc-5802d53b2453" containerName="registry-server"
Dec 01 10:19:21 crc kubenswrapper[4933]: E1201 10:19:21.272643 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f165f22-cbb0-4573-94cc-5802d53b2453" containerName="extract-utilities"
Dec 01 10:19:21 crc kubenswrapper[4933]: I1201 10:19:21.272650 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f165f22-cbb0-4573-94cc-5802d53b2453" containerName="extract-utilities"
Dec 01 10:19:21 crc kubenswrapper[4933]: E1201 10:19:21.272663 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f165f22-cbb0-4573-94cc-5802d53b2453" containerName="extract-content"
Dec 01 10:19:21 crc kubenswrapper[4933]: I1201 10:19:21.272669 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f165f22-cbb0-4573-94cc-5802d53b2453" containerName="extract-content"
Dec 01 10:19:21 crc kubenswrapper[4933]: I1201 10:19:21.272999 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f165f22-cbb0-4573-94cc-5802d53b2453" containerName="registry-server"
Dec 01 10:19:21 crc kubenswrapper[4933]: I1201 10:19:21.275213 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bhs6k"
Dec 01 10:19:21 crc kubenswrapper[4933]: I1201 10:19:21.286791 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bhs6k"]
Dec 01 10:19:21 crc kubenswrapper[4933]: I1201 10:19:21.352104 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/582f89cf-052b-4309-b9f1-1824449b09ef-catalog-content\") pod \"certified-operators-bhs6k\" (UID: \"582f89cf-052b-4309-b9f1-1824449b09ef\") " pod="openshift-marketplace/certified-operators-bhs6k"
Dec 01 10:19:21 crc kubenswrapper[4933]: I1201 10:19:21.352269 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/582f89cf-052b-4309-b9f1-1824449b09ef-utilities\") pod \"certified-operators-bhs6k\" (UID: \"582f89cf-052b-4309-b9f1-1824449b09ef\") " pod="openshift-marketplace/certified-operators-bhs6k"
Dec 01 10:19:21 crc kubenswrapper[4933]: I1201 10:19:21.352424 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwzx6\" (UniqueName: \"kubernetes.io/projected/582f89cf-052b-4309-b9f1-1824449b09ef-kube-api-access-wwzx6\") pod \"certified-operators-bhs6k\" (UID: \"582f89cf-052b-4309-b9f1-1824449b09ef\") " pod="openshift-marketplace/certified-operators-bhs6k"
Dec 01 10:19:21 crc kubenswrapper[4933]: I1201 10:19:21.454403 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/582f89cf-052b-4309-b9f1-1824449b09ef-utilities\") pod \"certified-operators-bhs6k\" (UID: \"582f89cf-052b-4309-b9f1-1824449b09ef\") " pod="openshift-marketplace/certified-operators-bhs6k"
Dec 01 10:19:21 crc kubenswrapper[4933]: I1201 10:19:21.454546 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwzx6\" (UniqueName: \"kubernetes.io/projected/582f89cf-052b-4309-b9f1-1824449b09ef-kube-api-access-wwzx6\") pod \"certified-operators-bhs6k\" (UID: \"582f89cf-052b-4309-b9f1-1824449b09ef\") " pod="openshift-marketplace/certified-operators-bhs6k"
Dec 01 10:19:21 crc kubenswrapper[4933]: I1201 10:19:21.454575 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/582f89cf-052b-4309-b9f1-1824449b09ef-catalog-content\") pod \"certified-operators-bhs6k\" (UID: \"582f89cf-052b-4309-b9f1-1824449b09ef\") " 
pod="openshift-marketplace/certified-operators-bhs6k" Dec 01 10:19:21 crc kubenswrapper[4933]: I1201 10:19:21.455319 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/582f89cf-052b-4309-b9f1-1824449b09ef-utilities\") pod \"certified-operators-bhs6k\" (UID: \"582f89cf-052b-4309-b9f1-1824449b09ef\") " pod="openshift-marketplace/certified-operators-bhs6k" Dec 01 10:19:21 crc kubenswrapper[4933]: I1201 10:19:21.455371 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/582f89cf-052b-4309-b9f1-1824449b09ef-catalog-content\") pod \"certified-operators-bhs6k\" (UID: \"582f89cf-052b-4309-b9f1-1824449b09ef\") " pod="openshift-marketplace/certified-operators-bhs6k" Dec 01 10:19:21 crc kubenswrapper[4933]: I1201 10:19:21.475916 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwzx6\" (UniqueName: \"kubernetes.io/projected/582f89cf-052b-4309-b9f1-1824449b09ef-kube-api-access-wwzx6\") pod \"certified-operators-bhs6k\" (UID: \"582f89cf-052b-4309-b9f1-1824449b09ef\") " pod="openshift-marketplace/certified-operators-bhs6k" Dec 01 10:19:21 crc kubenswrapper[4933]: I1201 10:19:21.636043 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bhs6k" Dec 01 10:19:32 crc kubenswrapper[4933]: E1201 10:19:32.625858 4933 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Dec 01 10:19:32 crc kubenswrapper[4933]: E1201 10:19:32.626646 4933 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5jfs7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(c272594d-4d61-490a-a44d-0a82106c9a1f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 10:19:32 crc kubenswrapper[4933]: E1201 10:19:32.628021 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="c272594d-4d61-490a-a44d-0a82106c9a1f" Dec 01 10:19:33 crc kubenswrapper[4933]: I1201 10:19:33.011677 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bhs6k"] Dec 01 10:19:33 crc kubenswrapper[4933]: I1201 10:19:33.554363 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhs6k" event={"ID":"582f89cf-052b-4309-b9f1-1824449b09ef","Type":"ContainerStarted","Data":"739d32bb162c3d95fd3d8715c560093a6e18e284213b7ffae43cd4edbcd11bb6"} Dec 01 10:19:33 crc kubenswrapper[4933]: I1201 10:19:33.554845 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhs6k" event={"ID":"582f89cf-052b-4309-b9f1-1824449b09ef","Type":"ContainerStarted","Data":"0b6d7d28ae5dfda2845322f0812d6c95df53d7a50a4c6285ab11c8ea54d19edd"} Dec 01 10:19:33 crc kubenswrapper[4933]: E1201 10:19:33.556296 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="c272594d-4d61-490a-a44d-0a82106c9a1f" Dec 01 10:19:34 crc kubenswrapper[4933]: I1201 10:19:34.591343 4933 generic.go:334] "Generic (PLEG): container finished" podID="582f89cf-052b-4309-b9f1-1824449b09ef" containerID="739d32bb162c3d95fd3d8715c560093a6e18e284213b7ffae43cd4edbcd11bb6" exitCode=0 Dec 01 10:19:34 crc kubenswrapper[4933]: I1201 10:19:34.591404 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhs6k" event={"ID":"582f89cf-052b-4309-b9f1-1824449b09ef","Type":"ContainerDied","Data":"739d32bb162c3d95fd3d8715c560093a6e18e284213b7ffae43cd4edbcd11bb6"} Dec 01 10:19:36 crc kubenswrapper[4933]: I1201 10:19:36.615684 4933 generic.go:334] "Generic (PLEG): container finished" podID="582f89cf-052b-4309-b9f1-1824449b09ef" containerID="05b0989ca41a1aeddf44d5c637a67a7bdf50616e737a2f3f70c05cb187a96f01" exitCode=0 Dec 01 10:19:36 crc kubenswrapper[4933]: I1201 10:19:36.615787 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhs6k" event={"ID":"582f89cf-052b-4309-b9f1-1824449b09ef","Type":"ContainerDied","Data":"05b0989ca41a1aeddf44d5c637a67a7bdf50616e737a2f3f70c05cb187a96f01"} Dec 01 10:19:37 crc kubenswrapper[4933]: I1201 10:19:37.632220 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhs6k" event={"ID":"582f89cf-052b-4309-b9f1-1824449b09ef","Type":"ContainerStarted","Data":"f16711c6f7736514affb11f11612f8cce4d409fa0340ea080fab521865d8516e"} Dec 01 10:19:37 crc kubenswrapper[4933]: I1201 10:19:37.696174 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bhs6k" podStartSLOduration=14.255679546 podStartE2EDuration="16.696148799s" podCreationTimestamp="2025-12-01 10:19:21 +0000 UTC" firstStartedPulling="2025-12-01 10:19:34.594452958 +0000 UTC m=+2865.236176583" lastFinishedPulling="2025-12-01 10:19:37.034922221 +0000 UTC m=+2867.676645836" observedRunningTime="2025-12-01 10:19:37.675455999 +0000 UTC m=+2868.317179624" watchObservedRunningTime="2025-12-01 10:19:37.696148799 +0000 UTC m=+2868.337872414" Dec 01 10:19:41 crc kubenswrapper[4933]: I1201 10:19:41.636400 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-bhs6k" Dec 01 10:19:41 crc kubenswrapper[4933]: I1201 10:19:41.636932 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bhs6k" Dec 01 10:19:41 crc kubenswrapper[4933]: I1201 10:19:41.696015 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bhs6k" Dec 01 10:19:41 crc kubenswrapper[4933]: I1201 10:19:41.741519 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 10:19:41 crc kubenswrapper[4933]: I1201 10:19:41.741934 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 10:19:47 crc kubenswrapper[4933]: I1201 10:19:47.770792 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"c272594d-4d61-490a-a44d-0a82106c9a1f","Type":"ContainerStarted","Data":"3ac48c569ba74871b3f956d4f2a4a2b4c9b19baa8f233bdf504f6b93908840b2"} Dec 01 10:19:47 crc kubenswrapper[4933]: I1201 10:19:47.793032 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.119214416 podStartE2EDuration="56.793009227s" podCreationTimestamp="2025-12-01 10:18:51 +0000 UTC" firstStartedPulling="2025-12-01 10:18:52.905931196 +0000 UTC m=+2823.547654811" lastFinishedPulling="2025-12-01 10:19:46.579726017 +0000 UTC m=+2877.221449622" observedRunningTime="2025-12-01 10:19:47.791391417 +0000 UTC m=+2878.433115032" watchObservedRunningTime="2025-12-01 10:19:47.793009227 +0000 UTC m=+2878.434732842" Dec 01 10:19:51 crc kubenswrapper[4933]: I1201 10:19:51.698691 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bhs6k" Dec 01 10:19:51 crc kubenswrapper[4933]: I1201 10:19:51.752007 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bhs6k"] Dec 01 10:19:51 crc kubenswrapper[4933]: I1201 10:19:51.813116 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bhs6k" podUID="582f89cf-052b-4309-b9f1-1824449b09ef" containerName="registry-server" containerID="cri-o://f16711c6f7736514affb11f11612f8cce4d409fa0340ea080fab521865d8516e" gracePeriod=2 Dec 01 10:19:52 crc kubenswrapper[4933]: I1201 10:19:52.276335 4933 util.go:48] "No ready sandbox for pod can be found. 
Dec 01 10:19:52 crc kubenswrapper[4933]: I1201 10:19:52.299685 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwzx6\" (UniqueName: \"kubernetes.io/projected/582f89cf-052b-4309-b9f1-1824449b09ef-kube-api-access-wwzx6\") pod \"582f89cf-052b-4309-b9f1-1824449b09ef\" (UID: \"582f89cf-052b-4309-b9f1-1824449b09ef\") "
Dec 01 10:19:52 crc kubenswrapper[4933]: I1201 10:19:52.299862 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/582f89cf-052b-4309-b9f1-1824449b09ef-utilities\") pod \"582f89cf-052b-4309-b9f1-1824449b09ef\" (UID: \"582f89cf-052b-4309-b9f1-1824449b09ef\") "
Dec 01 10:19:52 crc kubenswrapper[4933]: I1201 10:19:52.299950 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/582f89cf-052b-4309-b9f1-1824449b09ef-catalog-content\") pod \"582f89cf-052b-4309-b9f1-1824449b09ef\" (UID: \"582f89cf-052b-4309-b9f1-1824449b09ef\") "
Dec 01 10:19:52 crc kubenswrapper[4933]: I1201 10:19:52.300720 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/582f89cf-052b-4309-b9f1-1824449b09ef-utilities" (OuterVolumeSpecName: "utilities") pod "582f89cf-052b-4309-b9f1-1824449b09ef" (UID: "582f89cf-052b-4309-b9f1-1824449b09ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 10:19:52 crc kubenswrapper[4933]: I1201 10:19:52.338131 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/582f89cf-052b-4309-b9f1-1824449b09ef-kube-api-access-wwzx6" (OuterVolumeSpecName: "kube-api-access-wwzx6") pod "582f89cf-052b-4309-b9f1-1824449b09ef" (UID: "582f89cf-052b-4309-b9f1-1824449b09ef"). InnerVolumeSpecName "kube-api-access-wwzx6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 10:19:52 crc kubenswrapper[4933]: I1201 10:19:52.364699 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/582f89cf-052b-4309-b9f1-1824449b09ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "582f89cf-052b-4309-b9f1-1824449b09ef" (UID: "582f89cf-052b-4309-b9f1-1824449b09ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:19:52 crc kubenswrapper[4933]: I1201 10:19:52.401342 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/582f89cf-052b-4309-b9f1-1824449b09ef-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 10:19:52 crc kubenswrapper[4933]: I1201 10:19:52.401389 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/582f89cf-052b-4309-b9f1-1824449b09ef-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 10:19:52 crc kubenswrapper[4933]: I1201 10:19:52.401400 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwzx6\" (UniqueName: \"kubernetes.io/projected/582f89cf-052b-4309-b9f1-1824449b09ef-kube-api-access-wwzx6\") on node \"crc\" DevicePath \"\"" Dec 01 10:19:52 crc kubenswrapper[4933]: I1201 10:19:52.826340 4933 generic.go:334] "Generic (PLEG): container finished" podID="582f89cf-052b-4309-b9f1-1824449b09ef" containerID="f16711c6f7736514affb11f11612f8cce4d409fa0340ea080fab521865d8516e" exitCode=0 Dec 01 10:19:52 crc kubenswrapper[4933]: I1201 10:19:52.826392 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhs6k" event={"ID":"582f89cf-052b-4309-b9f1-1824449b09ef","Type":"ContainerDied","Data":"f16711c6f7736514affb11f11612f8cce4d409fa0340ea080fab521865d8516e"} Dec 01 10:19:52 crc kubenswrapper[4933]: I1201 10:19:52.826437 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhs6k" event={"ID":"582f89cf-052b-4309-b9f1-1824449b09ef","Type":"ContainerDied","Data":"0b6d7d28ae5dfda2845322f0812d6c95df53d7a50a4c6285ab11c8ea54d19edd"} Dec 01 10:19:52 crc kubenswrapper[4933]: I1201 10:19:52.826448 4933 util.go:48] "No ready sandbox for pod can be found. 
Dec 01 10:19:52 crc kubenswrapper[4933]: I1201 10:19:52.826457 4933 scope.go:117] "RemoveContainer" containerID="f16711c6f7736514affb11f11612f8cce4d409fa0340ea080fab521865d8516e"
Dec 01 10:19:52 crc kubenswrapper[4933]: I1201 10:19:52.859379 4933 scope.go:117] "RemoveContainer" containerID="05b0989ca41a1aeddf44d5c637a67a7bdf50616e737a2f3f70c05cb187a96f01"
Dec 01 10:19:52 crc kubenswrapper[4933]: I1201 10:19:52.867920 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bhs6k"]
Dec 01 10:19:52 crc kubenswrapper[4933]: I1201 10:19:52.876168 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bhs6k"]
Dec 01 10:19:52 crc kubenswrapper[4933]: I1201 10:19:52.895228 4933 scope.go:117] "RemoveContainer" containerID="739d32bb162c3d95fd3d8715c560093a6e18e284213b7ffae43cd4edbcd11bb6"
Dec 01 10:19:52 crc kubenswrapper[4933]: I1201 10:19:52.923940 4933 scope.go:117] "RemoveContainer" containerID="f16711c6f7736514affb11f11612f8cce4d409fa0340ea080fab521865d8516e"
Dec 01 10:19:52 crc kubenswrapper[4933]: E1201 10:19:52.924450 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f16711c6f7736514affb11f11612f8cce4d409fa0340ea080fab521865d8516e\": container with ID starting with f16711c6f7736514affb11f11612f8cce4d409fa0340ea080fab521865d8516e not found: ID does not exist" containerID="f16711c6f7736514affb11f11612f8cce4d409fa0340ea080fab521865d8516e"
Dec 01 10:19:52 crc kubenswrapper[4933]: I1201 10:19:52.924502 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f16711c6f7736514affb11f11612f8cce4d409fa0340ea080fab521865d8516e"} err="failed to get container status \"f16711c6f7736514affb11f11612f8cce4d409fa0340ea080fab521865d8516e\": rpc error: code = NotFound desc = could not find container \"f16711c6f7736514affb11f11612f8cce4d409fa0340ea080fab521865d8516e\": container with ID starting with f16711c6f7736514affb11f11612f8cce4d409fa0340ea080fab521865d8516e not found: ID does not exist"
Dec 01 10:19:52 crc kubenswrapper[4933]: I1201 10:19:52.924530 4933 scope.go:117] "RemoveContainer" containerID="05b0989ca41a1aeddf44d5c637a67a7bdf50616e737a2f3f70c05cb187a96f01"
Dec 01 10:19:52 crc kubenswrapper[4933]: E1201 10:19:52.924878 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05b0989ca41a1aeddf44d5c637a67a7bdf50616e737a2f3f70c05cb187a96f01\": container with ID starting with 05b0989ca41a1aeddf44d5c637a67a7bdf50616e737a2f3f70c05cb187a96f01 not found: ID does not exist" containerID="05b0989ca41a1aeddf44d5c637a67a7bdf50616e737a2f3f70c05cb187a96f01"
Dec 01 10:19:52 crc kubenswrapper[4933]: I1201 10:19:52.924901 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05b0989ca41a1aeddf44d5c637a67a7bdf50616e737a2f3f70c05cb187a96f01"} err="failed to get container status \"05b0989ca41a1aeddf44d5c637a67a7bdf50616e737a2f3f70c05cb187a96f01\": rpc error: code = NotFound desc = could not find container \"05b0989ca41a1aeddf44d5c637a67a7bdf50616e737a2f3f70c05cb187a96f01\": container with ID starting with 05b0989ca41a1aeddf44d5c637a67a7bdf50616e737a2f3f70c05cb187a96f01 not found: ID does not exist"
Dec 01 10:19:52 crc kubenswrapper[4933]: I1201 10:19:52.924913 4933 scope.go:117] "RemoveContainer" containerID="739d32bb162c3d95fd3d8715c560093a6e18e284213b7ffae43cd4edbcd11bb6"
containerID="739d32bb162c3d95fd3d8715c560093a6e18e284213b7ffae43cd4edbcd11bb6" Dec 01 10:19:52 crc kubenswrapper[4933]: E1201 10:19:52.925242 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"739d32bb162c3d95fd3d8715c560093a6e18e284213b7ffae43cd4edbcd11bb6\": container with ID starting with 739d32bb162c3d95fd3d8715c560093a6e18e284213b7ffae43cd4edbcd11bb6 not found: ID does not exist" containerID="739d32bb162c3d95fd3d8715c560093a6e18e284213b7ffae43cd4edbcd11bb6" Dec 01 10:19:52 crc kubenswrapper[4933]: I1201 10:19:52.925275 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"739d32bb162c3d95fd3d8715c560093a6e18e284213b7ffae43cd4edbcd11bb6"} err="failed to get container status \"739d32bb162c3d95fd3d8715c560093a6e18e284213b7ffae43cd4edbcd11bb6\": rpc error: code = NotFound desc = could not find container \"739d32bb162c3d95fd3d8715c560093a6e18e284213b7ffae43cd4edbcd11bb6\": container with ID starting with 739d32bb162c3d95fd3d8715c560093a6e18e284213b7ffae43cd4edbcd11bb6 not found: ID does not exist" Dec 01 10:19:53 crc kubenswrapper[4933]: I1201 10:19:53.680816 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="582f89cf-052b-4309-b9f1-1824449b09ef" path="/var/lib/kubelet/pods/582f89cf-052b-4309-b9f1-1824449b09ef/volumes" Dec 01 10:20:11 crc kubenswrapper[4933]: I1201 10:20:11.741257 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 10:20:11 crc kubenswrapper[4933]: I1201 10:20:11.742076 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 10:20:11 crc kubenswrapper[4933]: I1201 10:20:11.742134 4933 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" Dec 01 10:20:11 crc kubenswrapper[4933]: I1201 10:20:11.743090 4933 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d2bf94d76a94ee10c5642ca9b82e2c6ccf4d73dabae1332a061736cfbee4c70"} pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 10:20:11 crc kubenswrapper[4933]: I1201 10:20:11.743153 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" containerID="cri-o://3d2bf94d76a94ee10c5642ca9b82e2c6ccf4d73dabae1332a061736cfbee4c70" gracePeriod=600 Dec 01 10:20:12 crc kubenswrapper[4933]: I1201 10:20:12.039277 4933 generic.go:334] "Generic (PLEG): container finished" podID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerID="3d2bf94d76a94ee10c5642ca9b82e2c6ccf4d73dabae1332a061736cfbee4c70" exitCode=0 Dec 01 10:20:12 crc kubenswrapper[4933]: I1201 10:20:12.039362 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerDied","Data":"3d2bf94d76a94ee10c5642ca9b82e2c6ccf4d73dabae1332a061736cfbee4c70"} Dec 01 10:20:12 crc kubenswrapper[4933]: I1201 10:20:12.039659 4933 scope.go:117] "RemoveContainer" containerID="009871c3ea3491fa0f84b21e365119fdda1440360fa25615a7bbc87d45f77907" Dec 01 10:20:13 crc kubenswrapper[4933]: I1201 10:20:13.053249 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerStarted","Data":"e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae"} Dec 01 10:21:52 crc kubenswrapper[4933]: I1201 10:21:52.851325 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4w7n9"] Dec 01 10:21:52 crc kubenswrapper[4933]: E1201 10:21:52.854860 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582f89cf-052b-4309-b9f1-1824449b09ef" containerName="extract-utilities" Dec 01 10:21:52 crc kubenswrapper[4933]: I1201 10:21:52.854984 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="582f89cf-052b-4309-b9f1-1824449b09ef" containerName="extract-utilities" Dec 01 10:21:52 crc kubenswrapper[4933]: E1201 10:21:52.855073 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582f89cf-052b-4309-b9f1-1824449b09ef" containerName="extract-content" Dec 01 10:21:52 crc kubenswrapper[4933]: I1201 10:21:52.855135 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="582f89cf-052b-4309-b9f1-1824449b09ef" containerName="extract-content" Dec 01 10:21:52 crc kubenswrapper[4933]: E1201 10:21:52.855209 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582f89cf-052b-4309-b9f1-1824449b09ef" containerName="registry-server" Dec 01 10:21:52 crc kubenswrapper[4933]: I1201 10:21:52.855284 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="582f89cf-052b-4309-b9f1-1824449b09ef" containerName="registry-server" Dec 01 10:21:52 crc kubenswrapper[4933]: I1201 10:21:52.855662 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="582f89cf-052b-4309-b9f1-1824449b09ef" containerName="registry-server" Dec 01 10:21:52 crc kubenswrapper[4933]: I1201 10:21:52.857634 4933 util.go:30] "No sandbox for pod can be found. 
Dec 01 10:21:52 crc kubenswrapper[4933]: I1201 10:21:52.866410 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4w7n9"]
Dec 01 10:21:52 crc kubenswrapper[4933]: I1201 10:21:52.978204 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x99t9\" (UniqueName: \"kubernetes.io/projected/56c63d65-3e36-4df8-9e8c-96a87d3a40d4-kube-api-access-x99t9\") pod \"redhat-operators-4w7n9\" (UID: \"56c63d65-3e36-4df8-9e8c-96a87d3a40d4\") " pod="openshift-marketplace/redhat-operators-4w7n9"
Dec 01 10:21:52 crc kubenswrapper[4933]: I1201 10:21:52.978278 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56c63d65-3e36-4df8-9e8c-96a87d3a40d4-catalog-content\") pod \"redhat-operators-4w7n9\" (UID: \"56c63d65-3e36-4df8-9e8c-96a87d3a40d4\") " pod="openshift-marketplace/redhat-operators-4w7n9"
Dec 01 10:21:52 crc kubenswrapper[4933]: I1201 10:21:52.978402 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56c63d65-3e36-4df8-9e8c-96a87d3a40d4-utilities\") pod \"redhat-operators-4w7n9\" (UID: \"56c63d65-3e36-4df8-9e8c-96a87d3a40d4\") " pod="openshift-marketplace/redhat-operators-4w7n9"
Dec 01 10:21:53 crc kubenswrapper[4933]: I1201 10:21:53.080038 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x99t9\" (UniqueName: \"kubernetes.io/projected/56c63d65-3e36-4df8-9e8c-96a87d3a40d4-kube-api-access-x99t9\") pod \"redhat-operators-4w7n9\" (UID: \"56c63d65-3e36-4df8-9e8c-96a87d3a40d4\") " pod="openshift-marketplace/redhat-operators-4w7n9"
Dec 01 10:21:53 crc kubenswrapper[4933]: I1201 10:21:53.080104 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56c63d65-3e36-4df8-9e8c-96a87d3a40d4-catalog-content\") pod \"redhat-operators-4w7n9\" (UID: \"56c63d65-3e36-4df8-9e8c-96a87d3a40d4\") " pod="openshift-marketplace/redhat-operators-4w7n9"
Dec 01 10:21:53 crc kubenswrapper[4933]: I1201 10:21:53.080176 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56c63d65-3e36-4df8-9e8c-96a87d3a40d4-utilities\") pod \"redhat-operators-4w7n9\" (UID: \"56c63d65-3e36-4df8-9e8c-96a87d3a40d4\") " pod="openshift-marketplace/redhat-operators-4w7n9"
Dec 01 10:21:53 crc kubenswrapper[4933]: I1201 10:21:53.080809 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56c63d65-3e36-4df8-9e8c-96a87d3a40d4-utilities\") pod \"redhat-operators-4w7n9\" (UID: \"56c63d65-3e36-4df8-9e8c-96a87d3a40d4\") " pod="openshift-marketplace/redhat-operators-4w7n9"
Dec 01 10:21:53 crc kubenswrapper[4933]: I1201 10:21:53.081045 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56c63d65-3e36-4df8-9e8c-96a87d3a40d4-catalog-content\") pod \"redhat-operators-4w7n9\" (UID: \"56c63d65-3e36-4df8-9e8c-96a87d3a40d4\") " pod="openshift-marketplace/redhat-operators-4w7n9"
Dec 01 10:21:53 crc kubenswrapper[4933]: I1201 10:21:53.105197 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x99t9\" (UniqueName: \"kubernetes.io/projected/56c63d65-3e36-4df8-9e8c-96a87d3a40d4-kube-api-access-x99t9\") pod \"redhat-operators-4w7n9\" (UID: \"56c63d65-3e36-4df8-9e8c-96a87d3a40d4\") " pod="openshift-marketplace/redhat-operators-4w7n9"
\"kube-api-access-x99t9\" (UniqueName: \"kubernetes.io/projected/56c63d65-3e36-4df8-9e8c-96a87d3a40d4-kube-api-access-x99t9\") pod \"redhat-operators-4w7n9\" (UID: \"56c63d65-3e36-4df8-9e8c-96a87d3a40d4\") " pod="openshift-marketplace/redhat-operators-4w7n9" Dec 01 10:21:53 crc kubenswrapper[4933]: I1201 10:21:53.231941 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4w7n9" Dec 01 10:21:53 crc kubenswrapper[4933]: I1201 10:21:53.725538 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4w7n9"] Dec 01 10:21:54 crc kubenswrapper[4933]: I1201 10:21:54.162518 4933 generic.go:334] "Generic (PLEG): container finished" podID="56c63d65-3e36-4df8-9e8c-96a87d3a40d4" containerID="9d66a2b111d552e759c8cbc6649fbac65d6ef7b40cfee4b5dbf00b9b3b1e069d" exitCode=0 Dec 01 10:21:54 crc kubenswrapper[4933]: I1201 10:21:54.162827 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4w7n9" event={"ID":"56c63d65-3e36-4df8-9e8c-96a87d3a40d4","Type":"ContainerDied","Data":"9d66a2b111d552e759c8cbc6649fbac65d6ef7b40cfee4b5dbf00b9b3b1e069d"} Dec 01 10:21:54 crc kubenswrapper[4933]: I1201 10:21:54.162916 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4w7n9" event={"ID":"56c63d65-3e36-4df8-9e8c-96a87d3a40d4","Type":"ContainerStarted","Data":"f938ba08da692310d1ca4efd27aa8361714e29bb8e85b26ba5603ac638cdf633"} Dec 01 10:22:05 crc kubenswrapper[4933]: I1201 10:22:05.297287 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4w7n9" event={"ID":"56c63d65-3e36-4df8-9e8c-96a87d3a40d4","Type":"ContainerStarted","Data":"20bca2e67cc1564e941a82caa7ad3dbf692eff0f5a6dd38f25cd356c75ef9ccc"} Dec 01 10:22:08 crc kubenswrapper[4933]: I1201 10:22:08.327280 4933 generic.go:334] "Generic (PLEG): container finished" podID="56c63d65-3e36-4df8-9e8c-96a87d3a40d4" containerID="20bca2e67cc1564e941a82caa7ad3dbf692eff0f5a6dd38f25cd356c75ef9ccc" exitCode=0 Dec 01 10:22:08 crc kubenswrapper[4933]: I1201 10:22:08.327360 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4w7n9" event={"ID":"56c63d65-3e36-4df8-9e8c-96a87d3a40d4","Type":"ContainerDied","Data":"20bca2e67cc1564e941a82caa7ad3dbf692eff0f5a6dd38f25cd356c75ef9ccc"} Dec 01 10:22:10 crc kubenswrapper[4933]: I1201 10:22:10.365883 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4w7n9" event={"ID":"56c63d65-3e36-4df8-9e8c-96a87d3a40d4","Type":"ContainerStarted","Data":"2893c73e4a23aee75d3e023f9df87d808849825f7c719851c9db0059f6ed92ad"} Dec 01 10:22:13 crc kubenswrapper[4933]: I1201 10:22:13.232556 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4w7n9" Dec 01 10:22:13 crc kubenswrapper[4933]: I1201 10:22:13.233079 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4w7n9" Dec 01 10:22:14 crc kubenswrapper[4933]: I1201 10:22:14.292171 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4w7n9" podUID="56c63d65-3e36-4df8-9e8c-96a87d3a40d4" containerName="registry-server" probeResult="failure" output=< Dec 01 10:22:14 crc kubenswrapper[4933]: timeout: failed to connect service ":50051" within 1s Dec 01 10:22:14 crc kubenswrapper[4933]: > Dec 01 10:22:23 crc 
Dec 01 10:22:23 crc kubenswrapper[4933]: I1201 10:22:23.335606 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4w7n9" podStartSLOduration=16.398258483 podStartE2EDuration="31.335573811s" podCreationTimestamp="2025-12-01 10:21:52 +0000 UTC" firstStartedPulling="2025-12-01 10:21:54.165826463 +0000 UTC m=+3004.807550078" lastFinishedPulling="2025-12-01 10:22:09.103141791 +0000 UTC m=+3019.744865406" observedRunningTime="2025-12-01 10:22:10.385699579 +0000 UTC m=+3021.027423184" watchObservedRunningTime="2025-12-01 10:22:23.335573811 +0000 UTC m=+3033.977297426"
Dec 01 10:22:23 crc kubenswrapper[4933]: I1201 10:22:23.347305 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4w7n9"
Dec 01 10:22:23 crc kubenswrapper[4933]: I1201 10:22:23.876442 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4w7n9"]
Dec 01 10:22:24 crc kubenswrapper[4933]: I1201 10:22:24.059254 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p6ldj"]
Dec 01 10:22:24 crc kubenswrapper[4933]: I1201 10:22:24.059586 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p6ldj" podUID="d11ee476-c470-44a2-8570-ebc6f5893bdb" containerName="registry-server" containerID="cri-o://5c18450c4a829469afa65382e31f06905b92fd129250cc074a8f2589e55bc401" gracePeriod=2
Dec 01 10:22:24 crc kubenswrapper[4933]: E1201 10:22:24.510432 4933 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c18450c4a829469afa65382e31f06905b92fd129250cc074a8f2589e55bc401 is running failed: container process not found" containerID="5c18450c4a829469afa65382e31f06905b92fd129250cc074a8f2589e55bc401" cmd=["grpc_health_probe","-addr=:50051"]
Dec 01 10:22:24 crc kubenswrapper[4933]: E1201 10:22:24.511186 4933 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c18450c4a829469afa65382e31f06905b92fd129250cc074a8f2589e55bc401 is running failed: container process not found" containerID="5c18450c4a829469afa65382e31f06905b92fd129250cc074a8f2589e55bc401" cmd=["grpc_health_probe","-addr=:50051"]
Dec 01 10:22:24 crc kubenswrapper[4933]: E1201 10:22:24.511588 4933 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c18450c4a829469afa65382e31f06905b92fd129250cc074a8f2589e55bc401 is running failed: container process not found" containerID="5c18450c4a829469afa65382e31f06905b92fd129250cc074a8f2589e55bc401" cmd=["grpc_health_probe","-addr=:50051"]
Dec 01 10:22:24 crc kubenswrapper[4933]: E1201 10:22:24.511617 4933 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c18450c4a829469afa65382e31f06905b92fd129250cc074a8f2589e55bc401 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-p6ldj" podUID="d11ee476-c470-44a2-8570-ebc6f5893bdb" containerName="registry-server"
Dec 01 10:22:24 crc kubenswrapper[4933]: I1201 10:22:24.549908 4933 generic.go:334] "Generic (PLEG): container finished" podID="d11ee476-c470-44a2-8570-ebc6f5893bdb" containerID="5c18450c4a829469afa65382e31f06905b92fd129250cc074a8f2589e55bc401" exitCode=0
Dec 01 10:22:24 crc kubenswrapper[4933]: I1201 10:22:24.550953 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6ldj" event={"ID":"d11ee476-c470-44a2-8570-ebc6f5893bdb","Type":"ContainerDied","Data":"5c18450c4a829469afa65382e31f06905b92fd129250cc074a8f2589e55bc401"}
Dec 01 10:22:24 crc kubenswrapper[4933]: I1201 10:22:24.807755 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p6ldj"
Dec 01 10:22:24 crc kubenswrapper[4933]: I1201 10:22:24.969214 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d11ee476-c470-44a2-8570-ebc6f5893bdb-utilities\") pod \"d11ee476-c470-44a2-8570-ebc6f5893bdb\" (UID: \"d11ee476-c470-44a2-8570-ebc6f5893bdb\") "
Dec 01 10:22:24 crc kubenswrapper[4933]: I1201 10:22:24.969294 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzjmh\" (UniqueName: \"kubernetes.io/projected/d11ee476-c470-44a2-8570-ebc6f5893bdb-kube-api-access-gzjmh\") pod \"d11ee476-c470-44a2-8570-ebc6f5893bdb\" (UID: \"d11ee476-c470-44a2-8570-ebc6f5893bdb\") "
Dec 01 10:22:24 crc kubenswrapper[4933]: I1201 10:22:24.969824 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d11ee476-c470-44a2-8570-ebc6f5893bdb-catalog-content\") pod \"d11ee476-c470-44a2-8570-ebc6f5893bdb\" (UID: \"d11ee476-c470-44a2-8570-ebc6f5893bdb\") "
Dec 01 10:22:24 crc kubenswrapper[4933]: I1201 10:22:24.969831 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d11ee476-c470-44a2-8570-ebc6f5893bdb-utilities" (OuterVolumeSpecName: "utilities") pod "d11ee476-c470-44a2-8570-ebc6f5893bdb" (UID: "d11ee476-c470-44a2-8570-ebc6f5893bdb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 10:22:24 crc kubenswrapper[4933]: I1201 10:22:24.970751 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d11ee476-c470-44a2-8570-ebc6f5893bdb-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 10:22:24 crc kubenswrapper[4933]: I1201 10:22:24.977828 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d11ee476-c470-44a2-8570-ebc6f5893bdb-kube-api-access-gzjmh" (OuterVolumeSpecName: "kube-api-access-gzjmh") pod "d11ee476-c470-44a2-8570-ebc6f5893bdb" (UID: "d11ee476-c470-44a2-8570-ebc6f5893bdb"). InnerVolumeSpecName "kube-api-access-gzjmh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 10:22:25 crc kubenswrapper[4933]: I1201 10:22:25.065807 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d11ee476-c470-44a2-8570-ebc6f5893bdb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d11ee476-c470-44a2-8570-ebc6f5893bdb" (UID: "d11ee476-c470-44a2-8570-ebc6f5893bdb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:22:25 crc kubenswrapper[4933]: I1201 10:22:25.073356 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d11ee476-c470-44a2-8570-ebc6f5893bdb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 10:22:25 crc kubenswrapper[4933]: I1201 10:22:25.073403 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzjmh\" (UniqueName: \"kubernetes.io/projected/d11ee476-c470-44a2-8570-ebc6f5893bdb-kube-api-access-gzjmh\") on node \"crc\" DevicePath \"\"" Dec 01 10:22:25 crc kubenswrapper[4933]: I1201 10:22:25.563804 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6ldj" event={"ID":"d11ee476-c470-44a2-8570-ebc6f5893bdb","Type":"ContainerDied","Data":"ed64d8e1469ef13ab09be35dc7d8c369fdb22899b75e30b7264e8a24da5ba3ae"} Dec 01 10:22:25 crc kubenswrapper[4933]: I1201 10:22:25.563867 4933 scope.go:117] "RemoveContainer" containerID="5c18450c4a829469afa65382e31f06905b92fd129250cc074a8f2589e55bc401" Dec 01 10:22:25 crc kubenswrapper[4933]: I1201 10:22:25.563942 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p6ldj" Dec 01 10:22:25 crc kubenswrapper[4933]: I1201 10:22:25.588416 4933 scope.go:117] "RemoveContainer" containerID="173d0379e7bcd42383b9aa25e92103444c2a6f5f81df61539c7457fa5fe9f447" Dec 01 10:22:25 crc kubenswrapper[4933]: I1201 10:22:25.606624 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p6ldj"] Dec 01 10:22:25 crc kubenswrapper[4933]: I1201 10:22:25.615758 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p6ldj"] Dec 01 10:22:25 crc kubenswrapper[4933]: I1201 10:22:25.640184 4933 scope.go:117] "RemoveContainer" containerID="1b9872314a2e500f673b29f8b650e7d5e64277f681b7f76057217a9fd1924baf" Dec 01 10:22:25 crc kubenswrapper[4933]: I1201 10:22:25.686144 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d11ee476-c470-44a2-8570-ebc6f5893bdb" path="/var/lib/kubelet/pods/d11ee476-c470-44a2-8570-ebc6f5893bdb/volumes" Dec 01 10:22:41 crc kubenswrapper[4933]: I1201 10:22:41.741419 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 10:22:41 crc kubenswrapper[4933]: I1201 10:22:41.742426 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 10:23:11 crc kubenswrapper[4933]: I1201 10:23:11.741470 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 10:23:11 crc kubenswrapper[4933]: I1201 10:23:11.742139 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" 
podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 10:23:41 crc kubenswrapper[4933]: I1201 10:23:41.741114 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 10:23:41 crc kubenswrapper[4933]: I1201 10:23:41.741919 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 10:23:41 crc kubenswrapper[4933]: I1201 10:23:41.741969 4933 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" Dec 01 10:23:41 crc kubenswrapper[4933]: I1201 10:23:41.742998 4933 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae"} pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 10:23:41 crc kubenswrapper[4933]: I1201 10:23:41.743066 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" containerID="cri-o://e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" gracePeriod=600 Dec 01 10:23:41 crc kubenswrapper[4933]: E1201 10:23:41.899011 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:23:42 crc kubenswrapper[4933]: I1201 10:23:42.424648 4933 generic.go:334] "Generic (PLEG): container finished" podID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerID="e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" exitCode=0 Dec 01 10:23:42 crc kubenswrapper[4933]: I1201 10:23:42.424699 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerDied","Data":"e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae"} Dec 01 10:23:42 crc kubenswrapper[4933]: I1201 10:23:42.424738 4933 scope.go:117] "RemoveContainer" containerID="3d2bf94d76a94ee10c5642ca9b82e2c6ccf4d73dabae1332a061736cfbee4c70" Dec 01 10:23:42 crc kubenswrapper[4933]: I1201 10:23:42.425508 4933 scope.go:117] "RemoveContainer" containerID="e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" Dec 01 10:23:42 crc kubenswrapper[4933]: E1201 10:23:42.425805 4933 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:23:55 crc kubenswrapper[4933]: I1201 10:23:55.669174 4933 scope.go:117] "RemoveContainer" containerID="e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" Dec 01 10:23:55 crc kubenswrapper[4933]: E1201 10:23:55.670232 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:23:55 crc kubenswrapper[4933]: I1201 10:23:55.872292 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gpq4t"] Dec 01 10:23:55 crc kubenswrapper[4933]: E1201 10:23:55.873075 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11ee476-c470-44a2-8570-ebc6f5893bdb" containerName="registry-server" Dec 01 10:23:55 crc kubenswrapper[4933]: I1201 10:23:55.873090 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11ee476-c470-44a2-8570-ebc6f5893bdb" containerName="registry-server" Dec 01 10:23:55 crc kubenswrapper[4933]: E1201 10:23:55.873124 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11ee476-c470-44a2-8570-ebc6f5893bdb" containerName="extract-utilities" Dec 01 10:23:55 crc kubenswrapper[4933]: I1201 10:23:55.873131 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11ee476-c470-44a2-8570-ebc6f5893bdb" containerName="extract-utilities" Dec 01 10:23:55 crc kubenswrapper[4933]: E1201 10:23:55.873156 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11ee476-c470-44a2-8570-ebc6f5893bdb" containerName="extract-content" Dec 01 10:23:55 crc kubenswrapper[4933]: I1201 10:23:55.873166 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11ee476-c470-44a2-8570-ebc6f5893bdb" containerName="extract-content" Dec 01 10:23:55 crc kubenswrapper[4933]: I1201 10:23:55.873444 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11ee476-c470-44a2-8570-ebc6f5893bdb" containerName="registry-server" Dec 01 10:23:55 crc kubenswrapper[4933]: I1201 10:23:55.874853 4933 util.go:30] "No sandbox for pod can be found. 
Dec 01 10:23:55 crc kubenswrapper[4933]: I1201 10:23:55.894698 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gpq4t"]
Dec 01 10:23:55 crc kubenswrapper[4933]: I1201 10:23:55.897498 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5754f16-70b2-4f07-96ae-5233861175bb-catalog-content\") pod \"community-operators-gpq4t\" (UID: \"a5754f16-70b2-4f07-96ae-5233861175bb\") " pod="openshift-marketplace/community-operators-gpq4t"
Dec 01 10:23:55 crc kubenswrapper[4933]: I1201 10:23:55.897593 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5754f16-70b2-4f07-96ae-5233861175bb-utilities\") pod \"community-operators-gpq4t\" (UID: \"a5754f16-70b2-4f07-96ae-5233861175bb\") " pod="openshift-marketplace/community-operators-gpq4t"
Dec 01 10:23:55 crc kubenswrapper[4933]: I1201 10:23:55.897826 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lh87\" (UniqueName: \"kubernetes.io/projected/a5754f16-70b2-4f07-96ae-5233861175bb-kube-api-access-7lh87\") pod \"community-operators-gpq4t\" (UID: \"a5754f16-70b2-4f07-96ae-5233861175bb\") " pod="openshift-marketplace/community-operators-gpq4t"
Dec 01 10:23:55 crc kubenswrapper[4933]: I1201 10:23:55.999703 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5754f16-70b2-4f07-96ae-5233861175bb-catalog-content\") pod \"community-operators-gpq4t\" (UID: \"a5754f16-70b2-4f07-96ae-5233861175bb\") " pod="openshift-marketplace/community-operators-gpq4t"
Dec 01 10:23:55 crc kubenswrapper[4933]: I1201 10:23:55.999851 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5754f16-70b2-4f07-96ae-5233861175bb-utilities\") pod \"community-operators-gpq4t\" (UID: \"a5754f16-70b2-4f07-96ae-5233861175bb\") " pod="openshift-marketplace/community-operators-gpq4t"
Dec 01 10:23:56 crc kubenswrapper[4933]: I1201 10:23:56.000064 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lh87\" (UniqueName: \"kubernetes.io/projected/a5754f16-70b2-4f07-96ae-5233861175bb-kube-api-access-7lh87\") pod \"community-operators-gpq4t\" (UID: \"a5754f16-70b2-4f07-96ae-5233861175bb\") " pod="openshift-marketplace/community-operators-gpq4t"
Dec 01 10:23:56 crc kubenswrapper[4933]: I1201 10:23:56.000345 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5754f16-70b2-4f07-96ae-5233861175bb-catalog-content\") pod \"community-operators-gpq4t\" (UID: \"a5754f16-70b2-4f07-96ae-5233861175bb\") " pod="openshift-marketplace/community-operators-gpq4t"
Dec 01 10:23:56 crc kubenswrapper[4933]: I1201 10:23:56.000368 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5754f16-70b2-4f07-96ae-5233861175bb-utilities\") pod \"community-operators-gpq4t\" (UID: \"a5754f16-70b2-4f07-96ae-5233861175bb\") " pod="openshift-marketplace/community-operators-gpq4t"
Dec 01 10:23:56 crc kubenswrapper[4933]: I1201 10:23:56.025124 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lh87\" (UniqueName: \"kubernetes.io/projected/a5754f16-70b2-4f07-96ae-5233861175bb-kube-api-access-7lh87\") pod \"community-operators-gpq4t\" (UID: \"a5754f16-70b2-4f07-96ae-5233861175bb\") " pod="openshift-marketplace/community-operators-gpq4t"
"MountVolume.SetUp succeeded for volume \"kube-api-access-7lh87\" (UniqueName: \"kubernetes.io/projected/a5754f16-70b2-4f07-96ae-5233861175bb-kube-api-access-7lh87\") pod \"community-operators-gpq4t\" (UID: \"a5754f16-70b2-4f07-96ae-5233861175bb\") " pod="openshift-marketplace/community-operators-gpq4t" Dec 01 10:23:56 crc kubenswrapper[4933]: I1201 10:23:56.195807 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gpq4t" Dec 01 10:23:56 crc kubenswrapper[4933]: I1201 10:23:56.807597 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gpq4t"] Dec 01 10:23:57 crc kubenswrapper[4933]: I1201 10:23:57.617791 4933 generic.go:334] "Generic (PLEG): container finished" podID="a5754f16-70b2-4f07-96ae-5233861175bb" containerID="be3692a481c923a77f3cbb5bf4e58f5043ba12a79a9adea88634adbe85b5489b" exitCode=0 Dec 01 10:23:57 crc kubenswrapper[4933]: I1201 10:23:57.618577 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpq4t" event={"ID":"a5754f16-70b2-4f07-96ae-5233861175bb","Type":"ContainerDied","Data":"be3692a481c923a77f3cbb5bf4e58f5043ba12a79a9adea88634adbe85b5489b"} Dec 01 10:23:57 crc kubenswrapper[4933]: I1201 10:23:57.618783 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpq4t" event={"ID":"a5754f16-70b2-4f07-96ae-5233861175bb","Type":"ContainerStarted","Data":"c58acfbe399e21bb6a921999fc85ce379e4734fb099f081a44f47ae7cada7956"} Dec 01 10:23:57 crc kubenswrapper[4933]: I1201 10:23:57.621416 4933 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 10:24:06 crc kubenswrapper[4933]: I1201 10:24:06.668237 4933 scope.go:117] "RemoveContainer" containerID="e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" Dec 01 10:24:06 crc kubenswrapper[4933]: E1201 10:24:06.669174 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:24:06 crc kubenswrapper[4933]: I1201 10:24:06.723898 4933 generic.go:334] "Generic (PLEG): container finished" podID="a5754f16-70b2-4f07-96ae-5233861175bb" containerID="a7b10e975e3d031d60ac5eb72fa9c11f5280b52f893d8633ccdf5f23ba4e7764" exitCode=0 Dec 01 10:24:06 crc kubenswrapper[4933]: I1201 10:24:06.723981 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpq4t" event={"ID":"a5754f16-70b2-4f07-96ae-5233861175bb","Type":"ContainerDied","Data":"a7b10e975e3d031d60ac5eb72fa9c11f5280b52f893d8633ccdf5f23ba4e7764"} Dec 01 10:24:09 crc kubenswrapper[4933]: I1201 10:24:09.759659 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpq4t" event={"ID":"a5754f16-70b2-4f07-96ae-5233861175bb","Type":"ContainerStarted","Data":"3e334f2fbf814434f87df11400f4cf1b3ccaf17550a85264012d18661262c79a"} Dec 01 10:24:09 crc kubenswrapper[4933]: I1201 10:24:09.784021 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gpq4t" podStartSLOduration=3.745983052 
podStartE2EDuration="14.783983576s" podCreationTimestamp="2025-12-01 10:23:55 +0000 UTC" firstStartedPulling="2025-12-01 10:23:57.621129587 +0000 UTC m=+3128.262853202" lastFinishedPulling="2025-12-01 10:24:08.659130111 +0000 UTC m=+3139.300853726" observedRunningTime="2025-12-01 10:24:09.781407082 +0000 UTC m=+3140.423130707" watchObservedRunningTime="2025-12-01 10:24:09.783983576 +0000 UTC m=+3140.425707191" Dec 01 10:24:16 crc kubenswrapper[4933]: I1201 10:24:16.196393 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gpq4t" Dec 01 10:24:16 crc kubenswrapper[4933]: I1201 10:24:16.197525 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gpq4t" Dec 01 10:24:16 crc kubenswrapper[4933]: I1201 10:24:16.256120 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gpq4t" Dec 01 10:24:16 crc kubenswrapper[4933]: I1201 10:24:16.911110 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gpq4t" Dec 01 10:24:17 crc kubenswrapper[4933]: I1201 10:24:17.033441 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gpq4t"] Dec 01 10:24:17 crc kubenswrapper[4933]: I1201 10:24:17.088177 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dlplx"] Dec 01 10:24:17 crc kubenswrapper[4933]: I1201 10:24:17.089233 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dlplx" podUID="0bf9bb02-5235-4319-9c5d-44b8228fd6eb" containerName="registry-server" containerID="cri-o://15acbeec46c7b1ed0ab6d37d2220689c3bc60ff82e37998fa89edcd2de5f0f33" gracePeriod=2 Dec 01 10:24:17 crc kubenswrapper[4933]: I1201 10:24:17.860549 4933 generic.go:334] "Generic (PLEG): container finished" podID="0bf9bb02-5235-4319-9c5d-44b8228fd6eb" containerID="15acbeec46c7b1ed0ab6d37d2220689c3bc60ff82e37998fa89edcd2de5f0f33" exitCode=0 Dec 01 10:24:17 crc kubenswrapper[4933]: I1201 10:24:17.861635 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlplx" event={"ID":"0bf9bb02-5235-4319-9c5d-44b8228fd6eb","Type":"ContainerDied","Data":"15acbeec46c7b1ed0ab6d37d2220689c3bc60ff82e37998fa89edcd2de5f0f33"} Dec 01 10:24:18 crc kubenswrapper[4933]: I1201 10:24:18.245395 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dlplx" Dec 01 10:24:18 crc kubenswrapper[4933]: I1201 10:24:18.358729 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bf9bb02-5235-4319-9c5d-44b8228fd6eb-utilities\") pod \"0bf9bb02-5235-4319-9c5d-44b8228fd6eb\" (UID: \"0bf9bb02-5235-4319-9c5d-44b8228fd6eb\") " Dec 01 10:24:18 crc kubenswrapper[4933]: I1201 10:24:18.358898 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r22tb\" (UniqueName: \"kubernetes.io/projected/0bf9bb02-5235-4319-9c5d-44b8228fd6eb-kube-api-access-r22tb\") pod \"0bf9bb02-5235-4319-9c5d-44b8228fd6eb\" (UID: \"0bf9bb02-5235-4319-9c5d-44b8228fd6eb\") " Dec 01 10:24:18 crc kubenswrapper[4933]: I1201 10:24:18.359080 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bf9bb02-5235-4319-9c5d-44b8228fd6eb-catalog-content\") pod \"0bf9bb02-5235-4319-9c5d-44b8228fd6eb\" (UID: \"0bf9bb02-5235-4319-9c5d-44b8228fd6eb\") " Dec 01 10:24:18 crc kubenswrapper[4933]: I1201 10:24:18.359357 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bf9bb02-5235-4319-9c5d-44b8228fd6eb-utilities" (OuterVolumeSpecName: "utilities") pod "0bf9bb02-5235-4319-9c5d-44b8228fd6eb" (UID: "0bf9bb02-5235-4319-9c5d-44b8228fd6eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:24:18 crc kubenswrapper[4933]: I1201 10:24:18.359735 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bf9bb02-5235-4319-9c5d-44b8228fd6eb-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 10:24:18 crc kubenswrapper[4933]: I1201 10:24:18.374014 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bf9bb02-5235-4319-9c5d-44b8228fd6eb-kube-api-access-r22tb" (OuterVolumeSpecName: "kube-api-access-r22tb") pod "0bf9bb02-5235-4319-9c5d-44b8228fd6eb" (UID: "0bf9bb02-5235-4319-9c5d-44b8228fd6eb"). InnerVolumeSpecName "kube-api-access-r22tb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:24:18 crc kubenswrapper[4933]: I1201 10:24:18.462001 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r22tb\" (UniqueName: \"kubernetes.io/projected/0bf9bb02-5235-4319-9c5d-44b8228fd6eb-kube-api-access-r22tb\") on node \"crc\" DevicePath \"\"" Dec 01 10:24:18 crc kubenswrapper[4933]: I1201 10:24:18.534834 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bf9bb02-5235-4319-9c5d-44b8228fd6eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0bf9bb02-5235-4319-9c5d-44b8228fd6eb" (UID: "0bf9bb02-5235-4319-9c5d-44b8228fd6eb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:24:18 crc kubenswrapper[4933]: I1201 10:24:18.564013 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bf9bb02-5235-4319-9c5d-44b8228fd6eb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 10:24:18 crc kubenswrapper[4933]: I1201 10:24:18.878520 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlplx" event={"ID":"0bf9bb02-5235-4319-9c5d-44b8228fd6eb","Type":"ContainerDied","Data":"77bcc6b057d4bac935c85655dccf67b454381648a3613e3caf266c1bf5122a43"} Dec 01 10:24:18 crc kubenswrapper[4933]: I1201 10:24:18.878634 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dlplx" Dec 01 10:24:18 crc kubenswrapper[4933]: I1201 10:24:18.878648 4933 scope.go:117] "RemoveContainer" containerID="15acbeec46c7b1ed0ab6d37d2220689c3bc60ff82e37998fa89edcd2de5f0f33" Dec 01 10:24:18 crc kubenswrapper[4933]: I1201 10:24:18.923489 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dlplx"] Dec 01 10:24:18 crc kubenswrapper[4933]: I1201 10:24:18.926062 4933 scope.go:117] "RemoveContainer" containerID="5badca8253fe3d2d240a79e91992cea9a4451e3bef8560dd8bf00f69d91c96f0" Dec 01 10:24:18 crc kubenswrapper[4933]: I1201 10:24:18.934517 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dlplx"] Dec 01 10:24:18 crc kubenswrapper[4933]: I1201 10:24:18.950816 4933 scope.go:117] "RemoveContainer" containerID="ce63f5ad92406a26875530cd4c1f6510ee9b38f8f16af89e1e35ead8f118b3d4" Dec 01 10:24:19 crc kubenswrapper[4933]: I1201 10:24:19.681036 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bf9bb02-5235-4319-9c5d-44b8228fd6eb" path="/var/lib/kubelet/pods/0bf9bb02-5235-4319-9c5d-44b8228fd6eb/volumes" Dec 01 10:24:20 crc kubenswrapper[4933]: I1201 10:24:20.668190 4933 scope.go:117] "RemoveContainer" containerID="e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" Dec 01 10:24:20 crc kubenswrapper[4933]: E1201 10:24:20.668535 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:24:31 crc kubenswrapper[4933]: I1201 10:24:31.668946 4933 scope.go:117] "RemoveContainer" containerID="e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" Dec 01 10:24:31 crc kubenswrapper[4933]: E1201 10:24:31.669950 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:24:44 crc kubenswrapper[4933]: I1201 10:24:44.668812 4933 scope.go:117] "RemoveContainer" containerID="e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" Dec 01 10:24:44 crc kubenswrapper[4933]: E1201 
10:24:44.669734 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:24:58 crc kubenswrapper[4933]: I1201 10:24:58.668065 4933 scope.go:117] "RemoveContainer" containerID="e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" Dec 01 10:24:58 crc kubenswrapper[4933]: E1201 10:24:58.670877 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:25:13 crc kubenswrapper[4933]: I1201 10:25:13.668599 4933 scope.go:117] "RemoveContainer" containerID="e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" Dec 01 10:25:13 crc kubenswrapper[4933]: E1201 10:25:13.670023 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:25:24 crc kubenswrapper[4933]: I1201 10:25:24.667939 4933 scope.go:117] "RemoveContainer" containerID="e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" Dec 01 10:25:24 crc kubenswrapper[4933]: E1201 10:25:24.669813 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:25:37 crc kubenswrapper[4933]: I1201 10:25:37.668719 4933 scope.go:117] "RemoveContainer" containerID="e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" Dec 01 10:25:37 crc kubenswrapper[4933]: E1201 10:25:37.670168 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:25:50 crc kubenswrapper[4933]: I1201 10:25:50.668391 4933 scope.go:117] "RemoveContainer" containerID="e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" Dec 01 10:25:50 crc kubenswrapper[4933]: E1201 10:25:50.669200 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:26:02 crc kubenswrapper[4933]: I1201 10:26:02.667909 4933 scope.go:117] "RemoveContainer" containerID="e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" Dec 01 10:26:02 crc kubenswrapper[4933]: E1201 10:26:02.669012 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:26:17 crc kubenswrapper[4933]: I1201 10:26:17.668953 4933 scope.go:117] "RemoveContainer" containerID="e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" Dec 01 10:26:17 crc kubenswrapper[4933]: E1201 10:26:17.670361 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:26:30 crc kubenswrapper[4933]: I1201 10:26:30.668620 4933 scope.go:117] "RemoveContainer" containerID="e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" Dec 01 10:26:30 crc kubenswrapper[4933]: E1201 10:26:30.669799 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:26:45 crc kubenswrapper[4933]: I1201 10:26:45.668783 4933 scope.go:117] "RemoveContainer" containerID="e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" Dec 01 10:26:45 crc kubenswrapper[4933]: E1201 10:26:45.670017 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:26:57 crc kubenswrapper[4933]: I1201 10:26:57.668885 4933 scope.go:117] "RemoveContainer" containerID="e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" Dec 01 10:26:57 crc kubenswrapper[4933]: E1201 10:26:57.670211 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:27:08 crc kubenswrapper[4933]: I1201 10:27:08.668855 4933 scope.go:117] "RemoveContainer" containerID="e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" Dec 01 10:27:08 crc kubenswrapper[4933]: E1201 10:27:08.670206 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:27:21 crc kubenswrapper[4933]: I1201 10:27:21.669681 4933 scope.go:117] "RemoveContainer" containerID="e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" Dec 01 10:27:21 crc kubenswrapper[4933]: E1201 10:27:21.671064 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:27:34 crc kubenswrapper[4933]: I1201 10:27:34.668253 4933 scope.go:117] "RemoveContainer" containerID="e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" Dec 01 10:27:34 crc kubenswrapper[4933]: E1201 10:27:34.669142 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:27:47 crc kubenswrapper[4933]: I1201 10:27:47.668284 4933 scope.go:117] "RemoveContainer" containerID="e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" Dec 01 10:27:47 crc kubenswrapper[4933]: E1201 10:27:47.669446 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:28:00 crc kubenswrapper[4933]: I1201 10:28:00.668129 4933 scope.go:117] "RemoveContainer" containerID="e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" Dec 01 10:28:00 crc kubenswrapper[4933]: E1201 10:28:00.669222 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:28:13 crc kubenswrapper[4933]: I1201 10:28:13.668746 4933 
scope.go:117] "RemoveContainer" containerID="e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" Dec 01 10:28:13 crc kubenswrapper[4933]: E1201 10:28:13.670129 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:28:28 crc kubenswrapper[4933]: I1201 10:28:28.668550 4933 scope.go:117] "RemoveContainer" containerID="e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" Dec 01 10:28:28 crc kubenswrapper[4933]: E1201 10:28:28.669396 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:28:43 crc kubenswrapper[4933]: I1201 10:28:43.667840 4933 scope.go:117] "RemoveContainer" containerID="e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" Dec 01 10:28:44 crc kubenswrapper[4933]: I1201 10:28:44.516723 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerStarted","Data":"cf410e45128ad7bd3f91c85ab4abae6c39329b92f8a9232778c31943a501f4a2"} Dec 01 10:29:10 crc kubenswrapper[4933]: I1201 10:29:10.839084 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6fxdm"] Dec 01 10:29:10 crc kubenswrapper[4933]: E1201 10:29:10.844056 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf9bb02-5235-4319-9c5d-44b8228fd6eb" containerName="extract-utilities" Dec 01 10:29:10 crc kubenswrapper[4933]: I1201 10:29:10.844091 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf9bb02-5235-4319-9c5d-44b8228fd6eb" containerName="extract-utilities" Dec 01 10:29:10 crc kubenswrapper[4933]: E1201 10:29:10.844118 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf9bb02-5235-4319-9c5d-44b8228fd6eb" containerName="extract-content" Dec 01 10:29:10 crc kubenswrapper[4933]: I1201 10:29:10.844127 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf9bb02-5235-4319-9c5d-44b8228fd6eb" containerName="extract-content" Dec 01 10:29:10 crc kubenswrapper[4933]: E1201 10:29:10.844147 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf9bb02-5235-4319-9c5d-44b8228fd6eb" containerName="registry-server" Dec 01 10:29:10 crc kubenswrapper[4933]: I1201 10:29:10.844154 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf9bb02-5235-4319-9c5d-44b8228fd6eb" containerName="registry-server" Dec 01 10:29:10 crc kubenswrapper[4933]: I1201 10:29:10.844450 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bf9bb02-5235-4319-9c5d-44b8228fd6eb" containerName="registry-server" Dec 01 10:29:10 crc kubenswrapper[4933]: I1201 10:29:10.846571 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6fxdm" Dec 01 10:29:10 crc kubenswrapper[4933]: I1201 10:29:10.848467 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6fxdm"] Dec 01 10:29:10 crc kubenswrapper[4933]: I1201 10:29:10.907256 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw9qw\" (UniqueName: \"kubernetes.io/projected/542e8eea-3af5-49c4-a055-a7f05425d238-kube-api-access-fw9qw\") pod \"redhat-marketplace-6fxdm\" (UID: \"542e8eea-3af5-49c4-a055-a7f05425d238\") " pod="openshift-marketplace/redhat-marketplace-6fxdm" Dec 01 10:29:10 crc kubenswrapper[4933]: I1201 10:29:10.907404 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/542e8eea-3af5-49c4-a055-a7f05425d238-catalog-content\") pod \"redhat-marketplace-6fxdm\" (UID: \"542e8eea-3af5-49c4-a055-a7f05425d238\") " pod="openshift-marketplace/redhat-marketplace-6fxdm" Dec 01 10:29:10 crc kubenswrapper[4933]: I1201 10:29:10.907499 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/542e8eea-3af5-49c4-a055-a7f05425d238-utilities\") pod \"redhat-marketplace-6fxdm\" (UID: \"542e8eea-3af5-49c4-a055-a7f05425d238\") " pod="openshift-marketplace/redhat-marketplace-6fxdm" Dec 01 10:29:11 crc kubenswrapper[4933]: I1201 10:29:11.009475 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw9qw\" (UniqueName: \"kubernetes.io/projected/542e8eea-3af5-49c4-a055-a7f05425d238-kube-api-access-fw9qw\") pod \"redhat-marketplace-6fxdm\" (UID: \"542e8eea-3af5-49c4-a055-a7f05425d238\") " pod="openshift-marketplace/redhat-marketplace-6fxdm" Dec 01 10:29:11 crc kubenswrapper[4933]: I1201 10:29:11.010068 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/542e8eea-3af5-49c4-a055-a7f05425d238-catalog-content\") pod \"redhat-marketplace-6fxdm\" (UID: \"542e8eea-3af5-49c4-a055-a7f05425d238\") " pod="openshift-marketplace/redhat-marketplace-6fxdm" Dec 01 10:29:11 crc kubenswrapper[4933]: I1201 10:29:11.010713 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/542e8eea-3af5-49c4-a055-a7f05425d238-catalog-content\") pod \"redhat-marketplace-6fxdm\" (UID: \"542e8eea-3af5-49c4-a055-a7f05425d238\") " pod="openshift-marketplace/redhat-marketplace-6fxdm" Dec 01 10:29:11 crc kubenswrapper[4933]: I1201 10:29:11.010855 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/542e8eea-3af5-49c4-a055-a7f05425d238-utilities\") pod \"redhat-marketplace-6fxdm\" (UID: \"542e8eea-3af5-49c4-a055-a7f05425d238\") " pod="openshift-marketplace/redhat-marketplace-6fxdm" Dec 01 10:29:11 crc kubenswrapper[4933]: I1201 10:29:11.011203 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/542e8eea-3af5-49c4-a055-a7f05425d238-utilities\") pod \"redhat-marketplace-6fxdm\" (UID: \"542e8eea-3af5-49c4-a055-a7f05425d238\") " pod="openshift-marketplace/redhat-marketplace-6fxdm" Dec 01 10:29:11 crc kubenswrapper[4933]: I1201 10:29:11.029860 4933 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fw9qw\" (UniqueName: \"kubernetes.io/projected/542e8eea-3af5-49c4-a055-a7f05425d238-kube-api-access-fw9qw\") pod \"redhat-marketplace-6fxdm\" (UID: \"542e8eea-3af5-49c4-a055-a7f05425d238\") " pod="openshift-marketplace/redhat-marketplace-6fxdm" Dec 01 10:29:11 crc kubenswrapper[4933]: I1201 10:29:11.183178 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6fxdm" Dec 01 10:29:11 crc kubenswrapper[4933]: I1201 10:29:11.710131 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6fxdm"] Dec 01 10:29:11 crc kubenswrapper[4933]: I1201 10:29:11.829655 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6fxdm" event={"ID":"542e8eea-3af5-49c4-a055-a7f05425d238","Type":"ContainerStarted","Data":"aae81137cad1a29495548672834845764b6473d8322c9c95f4a56048d5771630"} Dec 01 10:29:12 crc kubenswrapper[4933]: I1201 10:29:12.840583 4933 generic.go:334] "Generic (PLEG): container finished" podID="542e8eea-3af5-49c4-a055-a7f05425d238" containerID="64833209f631afebca6969ee85a947db3e8b74daef9487c82de2b738bafef32e" exitCode=0 Dec 01 10:29:12 crc kubenswrapper[4933]: I1201 10:29:12.840637 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6fxdm" event={"ID":"542e8eea-3af5-49c4-a055-a7f05425d238","Type":"ContainerDied","Data":"64833209f631afebca6969ee85a947db3e8b74daef9487c82de2b738bafef32e"} Dec 01 10:29:12 crc kubenswrapper[4933]: I1201 10:29:12.843252 4933 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 10:29:14 crc kubenswrapper[4933]: I1201 10:29:14.859015 4933 generic.go:334] "Generic (PLEG): container finished" podID="542e8eea-3af5-49c4-a055-a7f05425d238" containerID="822d6f409e7a82e188c9c0c9ebf6411a0aa071282157e37cebcefaf3deb60d1d" exitCode=0 Dec 01 10:29:14 crc kubenswrapper[4933]: I1201 10:29:14.859203 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6fxdm" event={"ID":"542e8eea-3af5-49c4-a055-a7f05425d238","Type":"ContainerDied","Data":"822d6f409e7a82e188c9c0c9ebf6411a0aa071282157e37cebcefaf3deb60d1d"} Dec 01 10:29:15 crc kubenswrapper[4933]: I1201 10:29:15.871371 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6fxdm" event={"ID":"542e8eea-3af5-49c4-a055-a7f05425d238","Type":"ContainerStarted","Data":"9baae068b08ae541188407f15d61ef1f2468fe5856e25293d47447a4e6a9a63c"} Dec 01 10:29:15 crc kubenswrapper[4933]: I1201 10:29:15.897314 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6fxdm" podStartSLOduration=3.451099567 podStartE2EDuration="5.897278632s" podCreationTimestamp="2025-12-01 10:29:10 +0000 UTC" firstStartedPulling="2025-12-01 10:29:12.842949109 +0000 UTC m=+3443.484672724" lastFinishedPulling="2025-12-01 10:29:15.289128174 +0000 UTC m=+3445.930851789" observedRunningTime="2025-12-01 10:29:15.888298798 +0000 UTC m=+3446.530022423" watchObservedRunningTime="2025-12-01 10:29:15.897278632 +0000 UTC m=+3446.539002247" Dec 01 10:29:21 crc kubenswrapper[4933]: I1201 10:29:21.183846 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6fxdm" Dec 01 10:29:21 crc kubenswrapper[4933]: I1201 10:29:21.184745 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6fxdm" Dec 01 10:29:21 crc kubenswrapper[4933]: I1201 10:29:21.283174 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6fxdm" Dec 01 10:29:21 crc kubenswrapper[4933]: I1201 10:29:21.968259 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6fxdm" Dec 01 10:29:22 crc kubenswrapper[4933]: I1201 10:29:22.034507 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p2gc4"] Dec 01 10:29:22 crc kubenswrapper[4933]: I1201 10:29:22.037174 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p2gc4" Dec 01 10:29:22 crc kubenswrapper[4933]: I1201 10:29:22.063855 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p2gc4"] Dec 01 10:29:22 crc kubenswrapper[4933]: I1201 10:29:22.156013 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b328bac-e871-469f-89b1-38f315870a5f-catalog-content\") pod \"certified-operators-p2gc4\" (UID: \"3b328bac-e871-469f-89b1-38f315870a5f\") " pod="openshift-marketplace/certified-operators-p2gc4" Dec 01 10:29:22 crc kubenswrapper[4933]: I1201 10:29:22.156177 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b328bac-e871-469f-89b1-38f315870a5f-utilities\") pod \"certified-operators-p2gc4\" (UID: \"3b328bac-e871-469f-89b1-38f315870a5f\") " pod="openshift-marketplace/certified-operators-p2gc4" Dec 01 10:29:22 crc kubenswrapper[4933]: I1201 10:29:22.156226 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glv48\" (UniqueName: \"kubernetes.io/projected/3b328bac-e871-469f-89b1-38f315870a5f-kube-api-access-glv48\") pod \"certified-operators-p2gc4\" (UID: \"3b328bac-e871-469f-89b1-38f315870a5f\") " pod="openshift-marketplace/certified-operators-p2gc4" Dec 01 10:29:22 crc kubenswrapper[4933]: I1201 10:29:22.257709 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b328bac-e871-469f-89b1-38f315870a5f-utilities\") pod \"certified-operators-p2gc4\" (UID: \"3b328bac-e871-469f-89b1-38f315870a5f\") " pod="openshift-marketplace/certified-operators-p2gc4" Dec 01 10:29:22 crc kubenswrapper[4933]: I1201 10:29:22.257770 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glv48\" (UniqueName: \"kubernetes.io/projected/3b328bac-e871-469f-89b1-38f315870a5f-kube-api-access-glv48\") pod \"certified-operators-p2gc4\" (UID: \"3b328bac-e871-469f-89b1-38f315870a5f\") " pod="openshift-marketplace/certified-operators-p2gc4" Dec 01 10:29:22 crc kubenswrapper[4933]: I1201 10:29:22.257856 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b328bac-e871-469f-89b1-38f315870a5f-catalog-content\") pod \"certified-operators-p2gc4\" (UID: \"3b328bac-e871-469f-89b1-38f315870a5f\") " pod="openshift-marketplace/certified-operators-p2gc4" Dec 01 10:29:22 crc kubenswrapper[4933]: I1201 10:29:22.258372 4933 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b328bac-e871-469f-89b1-38f315870a5f-catalog-content\") pod \"certified-operators-p2gc4\" (UID: \"3b328bac-e871-469f-89b1-38f315870a5f\") " pod="openshift-marketplace/certified-operators-p2gc4" Dec 01 10:29:22 crc kubenswrapper[4933]: I1201 10:29:22.258586 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b328bac-e871-469f-89b1-38f315870a5f-utilities\") pod \"certified-operators-p2gc4\" (UID: \"3b328bac-e871-469f-89b1-38f315870a5f\") " pod="openshift-marketplace/certified-operators-p2gc4" Dec 01 10:29:22 crc kubenswrapper[4933]: I1201 10:29:22.279607 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glv48\" (UniqueName: \"kubernetes.io/projected/3b328bac-e871-469f-89b1-38f315870a5f-kube-api-access-glv48\") pod \"certified-operators-p2gc4\" (UID: \"3b328bac-e871-469f-89b1-38f315870a5f\") " pod="openshift-marketplace/certified-operators-p2gc4" Dec 01 10:29:22 crc kubenswrapper[4933]: I1201 10:29:22.373710 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p2gc4" Dec 01 10:29:22 crc kubenswrapper[4933]: I1201 10:29:22.627812 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6fxdm"] Dec 01 10:29:22 crc kubenswrapper[4933]: I1201 10:29:22.976550 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p2gc4"] Dec 01 10:29:23 crc kubenswrapper[4933]: I1201 10:29:23.946963 4933 generic.go:334] "Generic (PLEG): container finished" podID="3b328bac-e871-469f-89b1-38f315870a5f" containerID="e06027b7b93e0a7239009b5469cb0eff97adeb24fb5a8f299a02276dea5c9dc3" exitCode=0 Dec 01 10:29:23 crc kubenswrapper[4933]: I1201 10:29:23.947137 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2gc4" event={"ID":"3b328bac-e871-469f-89b1-38f315870a5f","Type":"ContainerDied","Data":"e06027b7b93e0a7239009b5469cb0eff97adeb24fb5a8f299a02276dea5c9dc3"} Dec 01 10:29:23 crc kubenswrapper[4933]: I1201 10:29:23.947806 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2gc4" event={"ID":"3b328bac-e871-469f-89b1-38f315870a5f","Type":"ContainerStarted","Data":"0b3dea00da55f700728a987de51fb70d5bbd6ee4deb89b8418589baead89e497"} Dec 01 10:29:23 crc kubenswrapper[4933]: I1201 10:29:23.947852 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6fxdm" podUID="542e8eea-3af5-49c4-a055-a7f05425d238" containerName="registry-server" containerID="cri-o://9baae068b08ae541188407f15d61ef1f2468fe5856e25293d47447a4e6a9a63c" gracePeriod=2 Dec 01 10:29:24 crc kubenswrapper[4933]: I1201 10:29:24.454708 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6fxdm" Dec 01 10:29:24 crc kubenswrapper[4933]: I1201 10:29:24.503699 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw9qw\" (UniqueName: \"kubernetes.io/projected/542e8eea-3af5-49c4-a055-a7f05425d238-kube-api-access-fw9qw\") pod \"542e8eea-3af5-49c4-a055-a7f05425d238\" (UID: \"542e8eea-3af5-49c4-a055-a7f05425d238\") " Dec 01 10:29:24 crc kubenswrapper[4933]: I1201 10:29:24.503934 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/542e8eea-3af5-49c4-a055-a7f05425d238-utilities\") pod \"542e8eea-3af5-49c4-a055-a7f05425d238\" (UID: \"542e8eea-3af5-49c4-a055-a7f05425d238\") " Dec 01 10:29:24 crc kubenswrapper[4933]: I1201 10:29:24.504072 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/542e8eea-3af5-49c4-a055-a7f05425d238-catalog-content\") pod \"542e8eea-3af5-49c4-a055-a7f05425d238\" (UID: \"542e8eea-3af5-49c4-a055-a7f05425d238\") " Dec 01 10:29:24 crc kubenswrapper[4933]: I1201 10:29:24.505066 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/542e8eea-3af5-49c4-a055-a7f05425d238-utilities" (OuterVolumeSpecName: "utilities") pod "542e8eea-3af5-49c4-a055-a7f05425d238" (UID: "542e8eea-3af5-49c4-a055-a7f05425d238"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:29:24 crc kubenswrapper[4933]: I1201 10:29:24.513033 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/542e8eea-3af5-49c4-a055-a7f05425d238-kube-api-access-fw9qw" (OuterVolumeSpecName: "kube-api-access-fw9qw") pod "542e8eea-3af5-49c4-a055-a7f05425d238" (UID: "542e8eea-3af5-49c4-a055-a7f05425d238"). InnerVolumeSpecName "kube-api-access-fw9qw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:29:24 crc kubenswrapper[4933]: I1201 10:29:24.524994 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/542e8eea-3af5-49c4-a055-a7f05425d238-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "542e8eea-3af5-49c4-a055-a7f05425d238" (UID: "542e8eea-3af5-49c4-a055-a7f05425d238"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:29:24 crc kubenswrapper[4933]: I1201 10:29:24.606048 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/542e8eea-3af5-49c4-a055-a7f05425d238-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 10:29:24 crc kubenswrapper[4933]: I1201 10:29:24.606076 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw9qw\" (UniqueName: \"kubernetes.io/projected/542e8eea-3af5-49c4-a055-a7f05425d238-kube-api-access-fw9qw\") on node \"crc\" DevicePath \"\"" Dec 01 10:29:24 crc kubenswrapper[4933]: I1201 10:29:24.606089 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/542e8eea-3af5-49c4-a055-a7f05425d238-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 10:29:24 crc kubenswrapper[4933]: I1201 10:29:24.958286 4933 generic.go:334] "Generic (PLEG): container finished" podID="542e8eea-3af5-49c4-a055-a7f05425d238" containerID="9baae068b08ae541188407f15d61ef1f2468fe5856e25293d47447a4e6a9a63c" exitCode=0 Dec 01 10:29:24 crc kubenswrapper[4933]: I1201 10:29:24.958347 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6fxdm" event={"ID":"542e8eea-3af5-49c4-a055-a7f05425d238","Type":"ContainerDied","Data":"9baae068b08ae541188407f15d61ef1f2468fe5856e25293d47447a4e6a9a63c"} Dec 01 10:29:24 crc kubenswrapper[4933]: I1201 10:29:24.958385 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6fxdm" Dec 01 10:29:24 crc kubenswrapper[4933]: I1201 10:29:24.958398 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6fxdm" event={"ID":"542e8eea-3af5-49c4-a055-a7f05425d238","Type":"ContainerDied","Data":"aae81137cad1a29495548672834845764b6473d8322c9c95f4a56048d5771630"} Dec 01 10:29:24 crc kubenswrapper[4933]: I1201 10:29:24.958415 4933 scope.go:117] "RemoveContainer" containerID="9baae068b08ae541188407f15d61ef1f2468fe5856e25293d47447a4e6a9a63c" Dec 01 10:29:24 crc kubenswrapper[4933]: I1201 10:29:24.989283 4933 scope.go:117] "RemoveContainer" containerID="822d6f409e7a82e188c9c0c9ebf6411a0aa071282157e37cebcefaf3deb60d1d" Dec 01 10:29:24 crc kubenswrapper[4933]: I1201 10:29:24.998612 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6fxdm"] Dec 01 10:29:25 crc kubenswrapper[4933]: I1201 10:29:25.007059 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6fxdm"] Dec 01 10:29:25 crc kubenswrapper[4933]: I1201 10:29:25.020590 4933 scope.go:117] "RemoveContainer" containerID="64833209f631afebca6969ee85a947db3e8b74daef9487c82de2b738bafef32e" Dec 01 10:29:25 crc kubenswrapper[4933]: I1201 10:29:25.052373 4933 scope.go:117] "RemoveContainer" containerID="9baae068b08ae541188407f15d61ef1f2468fe5856e25293d47447a4e6a9a63c" Dec 01 10:29:25 crc kubenswrapper[4933]: E1201 10:29:25.053075 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9baae068b08ae541188407f15d61ef1f2468fe5856e25293d47447a4e6a9a63c\": container with ID starting with 9baae068b08ae541188407f15d61ef1f2468fe5856e25293d47447a4e6a9a63c not found: ID does not exist" containerID="9baae068b08ae541188407f15d61ef1f2468fe5856e25293d47447a4e6a9a63c" Dec 01 10:29:25 crc kubenswrapper[4933]: I1201 10:29:25.053109 4933 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9baae068b08ae541188407f15d61ef1f2468fe5856e25293d47447a4e6a9a63c"} err="failed to get container status \"9baae068b08ae541188407f15d61ef1f2468fe5856e25293d47447a4e6a9a63c\": rpc error: code = NotFound desc = could not find container \"9baae068b08ae541188407f15d61ef1f2468fe5856e25293d47447a4e6a9a63c\": container with ID starting with 9baae068b08ae541188407f15d61ef1f2468fe5856e25293d47447a4e6a9a63c not found: ID does not exist" Dec 01 10:29:25 crc kubenswrapper[4933]: I1201 10:29:25.053131 4933 scope.go:117] "RemoveContainer" containerID="822d6f409e7a82e188c9c0c9ebf6411a0aa071282157e37cebcefaf3deb60d1d" Dec 01 10:29:25 crc kubenswrapper[4933]: E1201 10:29:25.054600 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"822d6f409e7a82e188c9c0c9ebf6411a0aa071282157e37cebcefaf3deb60d1d\": container with ID starting with 822d6f409e7a82e188c9c0c9ebf6411a0aa071282157e37cebcefaf3deb60d1d not found: ID does not exist" containerID="822d6f409e7a82e188c9c0c9ebf6411a0aa071282157e37cebcefaf3deb60d1d" Dec 01 10:29:25 crc kubenswrapper[4933]: I1201 10:29:25.054622 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"822d6f409e7a82e188c9c0c9ebf6411a0aa071282157e37cebcefaf3deb60d1d"} err="failed to get container status \"822d6f409e7a82e188c9c0c9ebf6411a0aa071282157e37cebcefaf3deb60d1d\": rpc error: code = NotFound desc = could not find container \"822d6f409e7a82e188c9c0c9ebf6411a0aa071282157e37cebcefaf3deb60d1d\": container with ID starting with 822d6f409e7a82e188c9c0c9ebf6411a0aa071282157e37cebcefaf3deb60d1d not found: ID does not exist" Dec 01 10:29:25 crc kubenswrapper[4933]: I1201 10:29:25.054635 4933 scope.go:117] "RemoveContainer" containerID="64833209f631afebca6969ee85a947db3e8b74daef9487c82de2b738bafef32e" Dec 01 10:29:25 crc kubenswrapper[4933]: E1201 10:29:25.054851 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64833209f631afebca6969ee85a947db3e8b74daef9487c82de2b738bafef32e\": container with ID starting with 64833209f631afebca6969ee85a947db3e8b74daef9487c82de2b738bafef32e not found: ID does not exist" containerID="64833209f631afebca6969ee85a947db3e8b74daef9487c82de2b738bafef32e" Dec 01 10:29:25 crc kubenswrapper[4933]: I1201 10:29:25.054874 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64833209f631afebca6969ee85a947db3e8b74daef9487c82de2b738bafef32e"} err="failed to get container status \"64833209f631afebca6969ee85a947db3e8b74daef9487c82de2b738bafef32e\": rpc error: code = NotFound desc = could not find container \"64833209f631afebca6969ee85a947db3e8b74daef9487c82de2b738bafef32e\": container with ID starting with 64833209f631afebca6969ee85a947db3e8b74daef9487c82de2b738bafef32e not found: ID does not exist" Dec 01 10:29:25 crc kubenswrapper[4933]: I1201 10:29:25.679156 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="542e8eea-3af5-49c4-a055-a7f05425d238" path="/var/lib/kubelet/pods/542e8eea-3af5-49c4-a055-a7f05425d238/volumes" Dec 01 10:29:25 crc kubenswrapper[4933]: I1201 10:29:25.976126 4933 generic.go:334] "Generic (PLEG): container finished" podID="3b328bac-e871-469f-89b1-38f315870a5f" containerID="256b7f9b4940f0bc4906d4ef2bbfbea4072578c5d32e4a51b90483eb336d1bc1" exitCode=0 Dec 01 10:29:25 crc kubenswrapper[4933]: I1201 
10:29:25.976172 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2gc4" event={"ID":"3b328bac-e871-469f-89b1-38f315870a5f","Type":"ContainerDied","Data":"256b7f9b4940f0bc4906d4ef2bbfbea4072578c5d32e4a51b90483eb336d1bc1"} Dec 01 10:29:26 crc kubenswrapper[4933]: I1201 10:29:26.992640 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2gc4" event={"ID":"3b328bac-e871-469f-89b1-38f315870a5f","Type":"ContainerStarted","Data":"61cd7c8fd99173c4fed6c6288410f0829fef34afde1df1192e8dcdaa5326c41d"} Dec 01 10:29:27 crc kubenswrapper[4933]: I1201 10:29:27.011281 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p2gc4" podStartSLOduration=3.3793208249999998 podStartE2EDuration="6.011264699s" podCreationTimestamp="2025-12-01 10:29:21 +0000 UTC" firstStartedPulling="2025-12-01 10:29:23.951698537 +0000 UTC m=+3454.593422152" lastFinishedPulling="2025-12-01 10:29:26.583642411 +0000 UTC m=+3457.225366026" observedRunningTime="2025-12-01 10:29:27.0108793 +0000 UTC m=+3457.652602925" watchObservedRunningTime="2025-12-01 10:29:27.011264699 +0000 UTC m=+3457.652988314" Dec 01 10:29:32 crc kubenswrapper[4933]: I1201 10:29:32.374902 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p2gc4" Dec 01 10:29:32 crc kubenswrapper[4933]: I1201 10:29:32.376886 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p2gc4" Dec 01 10:29:32 crc kubenswrapper[4933]: I1201 10:29:32.440557 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p2gc4" Dec 01 10:29:33 crc kubenswrapper[4933]: I1201 10:29:33.107144 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p2gc4" Dec 01 10:29:33 crc kubenswrapper[4933]: I1201 10:29:33.166542 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p2gc4"] Dec 01 10:29:35 crc kubenswrapper[4933]: I1201 10:29:35.082721 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p2gc4" podUID="3b328bac-e871-469f-89b1-38f315870a5f" containerName="registry-server" containerID="cri-o://61cd7c8fd99173c4fed6c6288410f0829fef34afde1df1192e8dcdaa5326c41d" gracePeriod=2 Dec 01 10:29:35 crc kubenswrapper[4933]: I1201 10:29:35.576282 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p2gc4" Dec 01 10:29:35 crc kubenswrapper[4933]: I1201 10:29:35.658237 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b328bac-e871-469f-89b1-38f315870a5f-utilities\") pod \"3b328bac-e871-469f-89b1-38f315870a5f\" (UID: \"3b328bac-e871-469f-89b1-38f315870a5f\") " Dec 01 10:29:35 crc kubenswrapper[4933]: I1201 10:29:35.658547 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b328bac-e871-469f-89b1-38f315870a5f-catalog-content\") pod \"3b328bac-e871-469f-89b1-38f315870a5f\" (UID: \"3b328bac-e871-469f-89b1-38f315870a5f\") " Dec 01 10:29:35 crc kubenswrapper[4933]: I1201 10:29:35.658604 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glv48\" (UniqueName: \"kubernetes.io/projected/3b328bac-e871-469f-89b1-38f315870a5f-kube-api-access-glv48\") pod \"3b328bac-e871-469f-89b1-38f315870a5f\" (UID: \"3b328bac-e871-469f-89b1-38f315870a5f\") " Dec 01 10:29:35 crc kubenswrapper[4933]: I1201 10:29:35.659249 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b328bac-e871-469f-89b1-38f315870a5f-utilities" (OuterVolumeSpecName: "utilities") pod "3b328bac-e871-469f-89b1-38f315870a5f" (UID: "3b328bac-e871-469f-89b1-38f315870a5f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:29:35 crc kubenswrapper[4933]: I1201 10:29:35.659594 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b328bac-e871-469f-89b1-38f315870a5f-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 10:29:35 crc kubenswrapper[4933]: I1201 10:29:35.664523 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b328bac-e871-469f-89b1-38f315870a5f-kube-api-access-glv48" (OuterVolumeSpecName: "kube-api-access-glv48") pod "3b328bac-e871-469f-89b1-38f315870a5f" (UID: "3b328bac-e871-469f-89b1-38f315870a5f"). InnerVolumeSpecName "kube-api-access-glv48". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:29:35 crc kubenswrapper[4933]: I1201 10:29:35.719094 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b328bac-e871-469f-89b1-38f315870a5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b328bac-e871-469f-89b1-38f315870a5f" (UID: "3b328bac-e871-469f-89b1-38f315870a5f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:29:35 crc kubenswrapper[4933]: I1201 10:29:35.761859 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b328bac-e871-469f-89b1-38f315870a5f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 10:29:35 crc kubenswrapper[4933]: I1201 10:29:35.761896 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glv48\" (UniqueName: \"kubernetes.io/projected/3b328bac-e871-469f-89b1-38f315870a5f-kube-api-access-glv48\") on node \"crc\" DevicePath \"\"" Dec 01 10:29:36 crc kubenswrapper[4933]: I1201 10:29:36.096693 4933 generic.go:334] "Generic (PLEG): container finished" podID="3b328bac-e871-469f-89b1-38f315870a5f" containerID="61cd7c8fd99173c4fed6c6288410f0829fef34afde1df1192e8dcdaa5326c41d" exitCode=0 Dec 01 10:29:36 crc kubenswrapper[4933]: I1201 10:29:36.096776 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2gc4" event={"ID":"3b328bac-e871-469f-89b1-38f315870a5f","Type":"ContainerDied","Data":"61cd7c8fd99173c4fed6c6288410f0829fef34afde1df1192e8dcdaa5326c41d"} Dec 01 10:29:36 crc kubenswrapper[4933]: I1201 10:29:36.097266 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2gc4" event={"ID":"3b328bac-e871-469f-89b1-38f315870a5f","Type":"ContainerDied","Data":"0b3dea00da55f700728a987de51fb70d5bbd6ee4deb89b8418589baead89e497"} Dec 01 10:29:36 crc kubenswrapper[4933]: I1201 10:29:36.096845 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p2gc4" Dec 01 10:29:36 crc kubenswrapper[4933]: I1201 10:29:36.097306 4933 scope.go:117] "RemoveContainer" containerID="61cd7c8fd99173c4fed6c6288410f0829fef34afde1df1192e8dcdaa5326c41d" Dec 01 10:29:36 crc kubenswrapper[4933]: I1201 10:29:36.126741 4933 scope.go:117] "RemoveContainer" containerID="256b7f9b4940f0bc4906d4ef2bbfbea4072578c5d32e4a51b90483eb336d1bc1" Dec 01 10:29:36 crc kubenswrapper[4933]: I1201 10:29:36.142166 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p2gc4"] Dec 01 10:29:36 crc kubenswrapper[4933]: I1201 10:29:36.155585 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p2gc4"] Dec 01 10:29:36 crc kubenswrapper[4933]: I1201 10:29:36.158758 4933 scope.go:117] "RemoveContainer" containerID="e06027b7b93e0a7239009b5469cb0eff97adeb24fb5a8f299a02276dea5c9dc3" Dec 01 10:29:36 crc kubenswrapper[4933]: I1201 10:29:36.209650 4933 scope.go:117] "RemoveContainer" containerID="61cd7c8fd99173c4fed6c6288410f0829fef34afde1df1192e8dcdaa5326c41d" Dec 01 10:29:36 crc kubenswrapper[4933]: E1201 10:29:36.210582 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61cd7c8fd99173c4fed6c6288410f0829fef34afde1df1192e8dcdaa5326c41d\": container with ID starting with 61cd7c8fd99173c4fed6c6288410f0829fef34afde1df1192e8dcdaa5326c41d not found: ID does not exist" containerID="61cd7c8fd99173c4fed6c6288410f0829fef34afde1df1192e8dcdaa5326c41d" Dec 01 10:29:36 crc kubenswrapper[4933]: I1201 10:29:36.210644 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61cd7c8fd99173c4fed6c6288410f0829fef34afde1df1192e8dcdaa5326c41d"} err="failed to get container status 
\"61cd7c8fd99173c4fed6c6288410f0829fef34afde1df1192e8dcdaa5326c41d\": rpc error: code = NotFound desc = could not find container \"61cd7c8fd99173c4fed6c6288410f0829fef34afde1df1192e8dcdaa5326c41d\": container with ID starting with 61cd7c8fd99173c4fed6c6288410f0829fef34afde1df1192e8dcdaa5326c41d not found: ID does not exist" Dec 01 10:29:36 crc kubenswrapper[4933]: I1201 10:29:36.210680 4933 scope.go:117] "RemoveContainer" containerID="256b7f9b4940f0bc4906d4ef2bbfbea4072578c5d32e4a51b90483eb336d1bc1" Dec 01 10:29:36 crc kubenswrapper[4933]: E1201 10:29:36.211108 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"256b7f9b4940f0bc4906d4ef2bbfbea4072578c5d32e4a51b90483eb336d1bc1\": container with ID starting with 256b7f9b4940f0bc4906d4ef2bbfbea4072578c5d32e4a51b90483eb336d1bc1 not found: ID does not exist" containerID="256b7f9b4940f0bc4906d4ef2bbfbea4072578c5d32e4a51b90483eb336d1bc1" Dec 01 10:29:36 crc kubenswrapper[4933]: I1201 10:29:36.211261 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"256b7f9b4940f0bc4906d4ef2bbfbea4072578c5d32e4a51b90483eb336d1bc1"} err="failed to get container status \"256b7f9b4940f0bc4906d4ef2bbfbea4072578c5d32e4a51b90483eb336d1bc1\": rpc error: code = NotFound desc = could not find container \"256b7f9b4940f0bc4906d4ef2bbfbea4072578c5d32e4a51b90483eb336d1bc1\": container with ID starting with 256b7f9b4940f0bc4906d4ef2bbfbea4072578c5d32e4a51b90483eb336d1bc1 not found: ID does not exist" Dec 01 10:29:36 crc kubenswrapper[4933]: I1201 10:29:36.211295 4933 scope.go:117] "RemoveContainer" containerID="e06027b7b93e0a7239009b5469cb0eff97adeb24fb5a8f299a02276dea5c9dc3" Dec 01 10:29:36 crc kubenswrapper[4933]: E1201 10:29:36.211709 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e06027b7b93e0a7239009b5469cb0eff97adeb24fb5a8f299a02276dea5c9dc3\": container with ID starting with e06027b7b93e0a7239009b5469cb0eff97adeb24fb5a8f299a02276dea5c9dc3 not found: ID does not exist" containerID="e06027b7b93e0a7239009b5469cb0eff97adeb24fb5a8f299a02276dea5c9dc3" Dec 01 10:29:36 crc kubenswrapper[4933]: I1201 10:29:36.211750 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e06027b7b93e0a7239009b5469cb0eff97adeb24fb5a8f299a02276dea5c9dc3"} err="failed to get container status \"e06027b7b93e0a7239009b5469cb0eff97adeb24fb5a8f299a02276dea5c9dc3\": rpc error: code = NotFound desc = could not find container \"e06027b7b93e0a7239009b5469cb0eff97adeb24fb5a8f299a02276dea5c9dc3\": container with ID starting with e06027b7b93e0a7239009b5469cb0eff97adeb24fb5a8f299a02276dea5c9dc3 not found: ID does not exist" Dec 01 10:29:37 crc kubenswrapper[4933]: I1201 10:29:37.687927 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b328bac-e871-469f-89b1-38f315870a5f" path="/var/lib/kubelet/pods/3b328bac-e871-469f-89b1-38f315870a5f/volumes" Dec 01 10:30:00 crc kubenswrapper[4933]: I1201 10:30:00.167439 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409750-p8rdj"] Dec 01 10:30:00 crc kubenswrapper[4933]: E1201 10:30:00.168742 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="542e8eea-3af5-49c4-a055-a7f05425d238" containerName="registry-server" Dec 01 10:30:00 crc kubenswrapper[4933]: I1201 10:30:00.168763 4933 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="542e8eea-3af5-49c4-a055-a7f05425d238" containerName="registry-server" Dec 01 10:30:00 crc kubenswrapper[4933]: E1201 10:30:00.168787 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b328bac-e871-469f-89b1-38f315870a5f" containerName="registry-server" Dec 01 10:30:00 crc kubenswrapper[4933]: I1201 10:30:00.168795 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b328bac-e871-469f-89b1-38f315870a5f" containerName="registry-server" Dec 01 10:30:00 crc kubenswrapper[4933]: E1201 10:30:00.168820 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="542e8eea-3af5-49c4-a055-a7f05425d238" containerName="extract-utilities" Dec 01 10:30:00 crc kubenswrapper[4933]: I1201 10:30:00.168830 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="542e8eea-3af5-49c4-a055-a7f05425d238" containerName="extract-utilities" Dec 01 10:30:00 crc kubenswrapper[4933]: E1201 10:30:00.168860 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b328bac-e871-469f-89b1-38f315870a5f" containerName="extract-content" Dec 01 10:30:00 crc kubenswrapper[4933]: I1201 10:30:00.168868 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b328bac-e871-469f-89b1-38f315870a5f" containerName="extract-content" Dec 01 10:30:00 crc kubenswrapper[4933]: E1201 10:30:00.168911 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="542e8eea-3af5-49c4-a055-a7f05425d238" containerName="extract-content" Dec 01 10:30:00 crc kubenswrapper[4933]: I1201 10:30:00.168920 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="542e8eea-3af5-49c4-a055-a7f05425d238" containerName="extract-content" Dec 01 10:30:00 crc kubenswrapper[4933]: E1201 10:30:00.168942 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b328bac-e871-469f-89b1-38f315870a5f" containerName="extract-utilities" Dec 01 10:30:00 crc kubenswrapper[4933]: I1201 10:30:00.168950 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b328bac-e871-469f-89b1-38f315870a5f" containerName="extract-utilities" Dec 01 10:30:00 crc kubenswrapper[4933]: I1201 10:30:00.169224 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="542e8eea-3af5-49c4-a055-a7f05425d238" containerName="registry-server" Dec 01 10:30:00 crc kubenswrapper[4933]: I1201 10:30:00.169241 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b328bac-e871-469f-89b1-38f315870a5f" containerName="registry-server" Dec 01 10:30:00 crc kubenswrapper[4933]: I1201 10:30:00.170157 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409750-p8rdj" Dec 01 10:30:00 crc kubenswrapper[4933]: I1201 10:30:00.173928 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 10:30:00 crc kubenswrapper[4933]: I1201 10:30:00.174406 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 10:30:00 crc kubenswrapper[4933]: I1201 10:30:00.180165 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409750-p8rdj"] Dec 01 10:30:00 crc kubenswrapper[4933]: I1201 10:30:00.300544 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps8vr\" (UniqueName: \"kubernetes.io/projected/d3473399-9663-4e0c-a416-c13cb659ab94-kube-api-access-ps8vr\") pod \"collect-profiles-29409750-p8rdj\" (UID: \"d3473399-9663-4e0c-a416-c13cb659ab94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409750-p8rdj" Dec 01 10:30:00 crc kubenswrapper[4933]: I1201 10:30:00.300625 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3473399-9663-4e0c-a416-c13cb659ab94-config-volume\") pod \"collect-profiles-29409750-p8rdj\" (UID: \"d3473399-9663-4e0c-a416-c13cb659ab94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409750-p8rdj" Dec 01 10:30:00 crc kubenswrapper[4933]: I1201 10:30:00.300666 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d3473399-9663-4e0c-a416-c13cb659ab94-secret-volume\") pod \"collect-profiles-29409750-p8rdj\" (UID: \"d3473399-9663-4e0c-a416-c13cb659ab94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409750-p8rdj" Dec 01 10:30:00 crc kubenswrapper[4933]: I1201 10:30:00.402383 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps8vr\" (UniqueName: \"kubernetes.io/projected/d3473399-9663-4e0c-a416-c13cb659ab94-kube-api-access-ps8vr\") pod \"collect-profiles-29409750-p8rdj\" (UID: \"d3473399-9663-4e0c-a416-c13cb659ab94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409750-p8rdj" Dec 01 10:30:00 crc kubenswrapper[4933]: I1201 10:30:00.402481 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3473399-9663-4e0c-a416-c13cb659ab94-config-volume\") pod \"collect-profiles-29409750-p8rdj\" (UID: \"d3473399-9663-4e0c-a416-c13cb659ab94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409750-p8rdj" Dec 01 10:30:00 crc kubenswrapper[4933]: I1201 10:30:00.402502 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d3473399-9663-4e0c-a416-c13cb659ab94-secret-volume\") pod \"collect-profiles-29409750-p8rdj\" (UID: \"d3473399-9663-4e0c-a416-c13cb659ab94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409750-p8rdj" Dec 01 10:30:00 crc kubenswrapper[4933]: I1201 10:30:00.403625 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3473399-9663-4e0c-a416-c13cb659ab94-config-volume\") pod 
\"collect-profiles-29409750-p8rdj\" (UID: \"d3473399-9663-4e0c-a416-c13cb659ab94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409750-p8rdj" Dec 01 10:30:00 crc kubenswrapper[4933]: I1201 10:30:00.408974 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d3473399-9663-4e0c-a416-c13cb659ab94-secret-volume\") pod \"collect-profiles-29409750-p8rdj\" (UID: \"d3473399-9663-4e0c-a416-c13cb659ab94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409750-p8rdj" Dec 01 10:30:00 crc kubenswrapper[4933]: I1201 10:30:00.442291 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps8vr\" (UniqueName: \"kubernetes.io/projected/d3473399-9663-4e0c-a416-c13cb659ab94-kube-api-access-ps8vr\") pod \"collect-profiles-29409750-p8rdj\" (UID: \"d3473399-9663-4e0c-a416-c13cb659ab94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409750-p8rdj" Dec 01 10:30:00 crc kubenswrapper[4933]: I1201 10:30:00.509674 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409750-p8rdj" Dec 01 10:30:01 crc kubenswrapper[4933]: I1201 10:30:01.055517 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409750-p8rdj"] Dec 01 10:30:01 crc kubenswrapper[4933]: I1201 10:30:01.338332 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409750-p8rdj" event={"ID":"d3473399-9663-4e0c-a416-c13cb659ab94","Type":"ContainerStarted","Data":"2628db086f99fbe0d44b9f55ca297f4d62617a2a85ee9275c9e40a1523af60b8"} Dec 01 10:30:01 crc kubenswrapper[4933]: I1201 10:30:01.338693 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409750-p8rdj" event={"ID":"d3473399-9663-4e0c-a416-c13cb659ab94","Type":"ContainerStarted","Data":"e8e5a948c8d92c606e99cf3205a430fcce1c015abc9732eb45eddb00056a2407"} Dec 01 10:30:01 crc kubenswrapper[4933]: I1201 10:30:01.364388 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29409750-p8rdj" podStartSLOduration=1.36436672 podStartE2EDuration="1.36436672s" podCreationTimestamp="2025-12-01 10:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 10:30:01.357676185 +0000 UTC m=+3491.999399830" watchObservedRunningTime="2025-12-01 10:30:01.36436672 +0000 UTC m=+3492.006090335" Dec 01 10:30:02 crc kubenswrapper[4933]: I1201 10:30:02.354886 4933 generic.go:334] "Generic (PLEG): container finished" podID="d3473399-9663-4e0c-a416-c13cb659ab94" containerID="2628db086f99fbe0d44b9f55ca297f4d62617a2a85ee9275c9e40a1523af60b8" exitCode=0 Dec 01 10:30:02 crc kubenswrapper[4933]: I1201 10:30:02.355013 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409750-p8rdj" event={"ID":"d3473399-9663-4e0c-a416-c13cb659ab94","Type":"ContainerDied","Data":"2628db086f99fbe0d44b9f55ca297f4d62617a2a85ee9275c9e40a1523af60b8"} Dec 01 10:30:03 crc kubenswrapper[4933]: I1201 10:30:03.841848 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409750-p8rdj" Dec 01 10:30:03 crc kubenswrapper[4933]: I1201 10:30:03.884247 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps8vr\" (UniqueName: \"kubernetes.io/projected/d3473399-9663-4e0c-a416-c13cb659ab94-kube-api-access-ps8vr\") pod \"d3473399-9663-4e0c-a416-c13cb659ab94\" (UID: \"d3473399-9663-4e0c-a416-c13cb659ab94\") " Dec 01 10:30:03 crc kubenswrapper[4933]: I1201 10:30:03.884440 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3473399-9663-4e0c-a416-c13cb659ab94-config-volume\") pod \"d3473399-9663-4e0c-a416-c13cb659ab94\" (UID: \"d3473399-9663-4e0c-a416-c13cb659ab94\") " Dec 01 10:30:03 crc kubenswrapper[4933]: I1201 10:30:03.884530 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d3473399-9663-4e0c-a416-c13cb659ab94-secret-volume\") pod \"d3473399-9663-4e0c-a416-c13cb659ab94\" (UID: \"d3473399-9663-4e0c-a416-c13cb659ab94\") " Dec 01 10:30:03 crc kubenswrapper[4933]: I1201 10:30:03.889214 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3473399-9663-4e0c-a416-c13cb659ab94-config-volume" (OuterVolumeSpecName: "config-volume") pod "d3473399-9663-4e0c-a416-c13cb659ab94" (UID: "d3473399-9663-4e0c-a416-c13cb659ab94"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 10:30:03 crc kubenswrapper[4933]: I1201 10:30:03.895079 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3473399-9663-4e0c-a416-c13cb659ab94-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d3473399-9663-4e0c-a416-c13cb659ab94" (UID: "d3473399-9663-4e0c-a416-c13cb659ab94"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:30:03 crc kubenswrapper[4933]: I1201 10:30:03.895243 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3473399-9663-4e0c-a416-c13cb659ab94-kube-api-access-ps8vr" (OuterVolumeSpecName: "kube-api-access-ps8vr") pod "d3473399-9663-4e0c-a416-c13cb659ab94" (UID: "d3473399-9663-4e0c-a416-c13cb659ab94"). InnerVolumeSpecName "kube-api-access-ps8vr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:30:03 crc kubenswrapper[4933]: I1201 10:30:03.987500 4933 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d3473399-9663-4e0c-a416-c13cb659ab94-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 10:30:03 crc kubenswrapper[4933]: I1201 10:30:03.987589 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps8vr\" (UniqueName: \"kubernetes.io/projected/d3473399-9663-4e0c-a416-c13cb659ab94-kube-api-access-ps8vr\") on node \"crc\" DevicePath \"\"" Dec 01 10:30:03 crc kubenswrapper[4933]: I1201 10:30:03.987609 4933 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3473399-9663-4e0c-a416-c13cb659ab94-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 10:30:04 crc kubenswrapper[4933]: I1201 10:30:04.379105 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409750-p8rdj" event={"ID":"d3473399-9663-4e0c-a416-c13cb659ab94","Type":"ContainerDied","Data":"e8e5a948c8d92c606e99cf3205a430fcce1c015abc9732eb45eddb00056a2407"} Dec 01 10:30:04 crc kubenswrapper[4933]: I1201 10:30:04.379490 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8e5a948c8d92c606e99cf3205a430fcce1c015abc9732eb45eddb00056a2407" Dec 01 10:30:04 crc kubenswrapper[4933]: I1201 10:30:04.379232 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409750-p8rdj" Dec 01 10:30:04 crc kubenswrapper[4933]: I1201 10:30:04.452732 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409705-774rd"] Dec 01 10:30:04 crc kubenswrapper[4933]: I1201 10:30:04.468704 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409705-774rd"] Dec 01 10:30:04 crc kubenswrapper[4933]: E1201 10:30:04.560009 4933 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3473399_9663_4e0c_a416_c13cb659ab94.slice/crio-e8e5a948c8d92c606e99cf3205a430fcce1c015abc9732eb45eddb00056a2407\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3473399_9663_4e0c_a416_c13cb659ab94.slice\": RecentStats: unable to find data in memory cache]" Dec 01 10:30:05 crc kubenswrapper[4933]: I1201 10:30:05.680363 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a3113e5-0de6-4738-bedf-6650252f52ad" path="/var/lib/kubelet/pods/0a3113e5-0de6-4738-bedf-6650252f52ad/volumes" Dec 01 10:30:59 crc kubenswrapper[4933]: I1201 10:30:59.559484 4933 scope.go:117] "RemoveContainer" containerID="7d1ef5d4efb645300909768838bafe371032e09633e263bea5786720bc9d9405" Dec 01 10:31:11 crc kubenswrapper[4933]: I1201 10:31:11.740587 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 10:31:11 crc kubenswrapper[4933]: I1201 10:31:11.744185 4933 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 10:31:41 crc kubenswrapper[4933]: I1201 10:31:41.741477 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 10:31:41 crc kubenswrapper[4933]: I1201 10:31:41.742500 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 10:32:11 crc kubenswrapper[4933]: I1201 10:32:11.741451 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 10:32:11 crc kubenswrapper[4933]: I1201 10:32:11.742190 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 10:32:11 crc kubenswrapper[4933]: I1201 10:32:11.742251 4933 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" Dec 01 10:32:11 crc kubenswrapper[4933]: I1201 10:32:11.743366 4933 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cf410e45128ad7bd3f91c85ab4abae6c39329b92f8a9232778c31943a501f4a2"} pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 10:32:11 crc kubenswrapper[4933]: I1201 10:32:11.743449 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" containerID="cri-o://cf410e45128ad7bd3f91c85ab4abae6c39329b92f8a9232778c31943a501f4a2" gracePeriod=600 Dec 01 10:32:12 crc kubenswrapper[4933]: I1201 10:32:12.861143 4933 generic.go:334] "Generic (PLEG): container finished" podID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerID="cf410e45128ad7bd3f91c85ab4abae6c39329b92f8a9232778c31943a501f4a2" exitCode=0 Dec 01 10:32:12 crc kubenswrapper[4933]: I1201 10:32:12.861198 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerDied","Data":"cf410e45128ad7bd3f91c85ab4abae6c39329b92f8a9232778c31943a501f4a2"} Dec 01 10:32:12 crc kubenswrapper[4933]: I1201 10:32:12.861764 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerStarted","Data":"3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3"} Dec 01 10:32:12 crc kubenswrapper[4933]: I1201 10:32:12.861786 4933 scope.go:117] "RemoveContainer" containerID="e49afaafca01a42e09c964706e7749f53d546d7299d3e328ac5824f037d816ae" Dec 01 10:32:18 crc kubenswrapper[4933]: I1201 10:32:18.930088 4933 generic.go:334] "Generic (PLEG): container finished" podID="c272594d-4d61-490a-a44d-0a82106c9a1f" containerID="3ac48c569ba74871b3f956d4f2a4a2b4c9b19baa8f233bdf504f6b93908840b2" exitCode=0 Dec 01 10:32:18 crc kubenswrapper[4933]: I1201 10:32:18.930187 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"c272594d-4d61-490a-a44d-0a82106c9a1f","Type":"ContainerDied","Data":"3ac48c569ba74871b3f956d4f2a4a2b4c9b19baa8f233bdf504f6b93908840b2"} Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.467795 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.498236 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c272594d-4d61-490a-a44d-0a82106c9a1f-ssh-key\") pod \"c272594d-4d61-490a-a44d-0a82106c9a1f\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.498372 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"c272594d-4d61-490a-a44d-0a82106c9a1f\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.498408 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c272594d-4d61-490a-a44d-0a82106c9a1f-config-data\") pod \"c272594d-4d61-490a-a44d-0a82106c9a1f\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.498451 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c272594d-4d61-490a-a44d-0a82106c9a1f-test-operator-ephemeral-temporary\") pod \"c272594d-4d61-490a-a44d-0a82106c9a1f\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.498573 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c272594d-4d61-490a-a44d-0a82106c9a1f-ca-certs\") pod \"c272594d-4d61-490a-a44d-0a82106c9a1f\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.498668 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c272594d-4d61-490a-a44d-0a82106c9a1f-openstack-config-secret\") pod \"c272594d-4d61-490a-a44d-0a82106c9a1f\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.498701 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c272594d-4d61-490a-a44d-0a82106c9a1f-test-operator-ephemeral-workdir\") pod 
\"c272594d-4d61-490a-a44d-0a82106c9a1f\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.498722 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jfs7\" (UniqueName: \"kubernetes.io/projected/c272594d-4d61-490a-a44d-0a82106c9a1f-kube-api-access-5jfs7\") pod \"c272594d-4d61-490a-a44d-0a82106c9a1f\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.498751 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c272594d-4d61-490a-a44d-0a82106c9a1f-openstack-config\") pod \"c272594d-4d61-490a-a44d-0a82106c9a1f\" (UID: \"c272594d-4d61-490a-a44d-0a82106c9a1f\") " Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.501325 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c272594d-4d61-490a-a44d-0a82106c9a1f-config-data" (OuterVolumeSpecName: "config-data") pod "c272594d-4d61-490a-a44d-0a82106c9a1f" (UID: "c272594d-4d61-490a-a44d-0a82106c9a1f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.502615 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c272594d-4d61-490a-a44d-0a82106c9a1f-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "c272594d-4d61-490a-a44d-0a82106c9a1f" (UID: "c272594d-4d61-490a-a44d-0a82106c9a1f"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.510238 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c272594d-4d61-490a-a44d-0a82106c9a1f-kube-api-access-5jfs7" (OuterVolumeSpecName: "kube-api-access-5jfs7") pod "c272594d-4d61-490a-a44d-0a82106c9a1f" (UID: "c272594d-4d61-490a-a44d-0a82106c9a1f"). InnerVolumeSpecName "kube-api-access-5jfs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.517038 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c272594d-4d61-490a-a44d-0a82106c9a1f-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "c272594d-4d61-490a-a44d-0a82106c9a1f" (UID: "c272594d-4d61-490a-a44d-0a82106c9a1f"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.524935 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "c272594d-4d61-490a-a44d-0a82106c9a1f" (UID: "c272594d-4d61-490a-a44d-0a82106c9a1f"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.545055 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c272594d-4d61-490a-a44d-0a82106c9a1f-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "c272594d-4d61-490a-a44d-0a82106c9a1f" (UID: "c272594d-4d61-490a-a44d-0a82106c9a1f"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.553873 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c272594d-4d61-490a-a44d-0a82106c9a1f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c272594d-4d61-490a-a44d-0a82106c9a1f" (UID: "c272594d-4d61-490a-a44d-0a82106c9a1f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.556587 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c272594d-4d61-490a-a44d-0a82106c9a1f-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "c272594d-4d61-490a-a44d-0a82106c9a1f" (UID: "c272594d-4d61-490a-a44d-0a82106c9a1f"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.567775 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c272594d-4d61-490a-a44d-0a82106c9a1f-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "c272594d-4d61-490a-a44d-0a82106c9a1f" (UID: "c272594d-4d61-490a-a44d-0a82106c9a1f"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.600750 4933 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c272594d-4d61-490a-a44d-0a82106c9a1f-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.600790 4933 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c272594d-4d61-490a-a44d-0a82106c9a1f-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.600805 4933 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c272594d-4d61-490a-a44d-0a82106c9a1f-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.600816 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jfs7\" (UniqueName: \"kubernetes.io/projected/c272594d-4d61-490a-a44d-0a82106c9a1f-kube-api-access-5jfs7\") on node \"crc\" DevicePath \"\"" Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.600829 4933 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c272594d-4d61-490a-a44d-0a82106c9a1f-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.600840 4933 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c272594d-4d61-490a-a44d-0a82106c9a1f-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.600878 4933 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.600891 4933 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c272594d-4d61-490a-a44d-0a82106c9a1f-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 10:32:20 
crc kubenswrapper[4933]: I1201 10:32:20.600903 4933 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c272594d-4d61-490a-a44d-0a82106c9a1f-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.620284 4933 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.703506 4933 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.947449 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"c272594d-4d61-490a-a44d-0a82106c9a1f","Type":"ContainerDied","Data":"559424c60950274e1a7352968eeb26167c406bde0e5f6636ec64c0efe21649ae"} Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.947490 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="559424c60950274e1a7352968eeb26167c406bde0e5f6636ec64c0efe21649ae" Dec 01 10:32:20 crc kubenswrapper[4933]: I1201 10:32:20.947545 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 01 10:32:31 crc kubenswrapper[4933]: I1201 10:32:31.482626 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 01 10:32:31 crc kubenswrapper[4933]: E1201 10:32:31.483799 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c272594d-4d61-490a-a44d-0a82106c9a1f" containerName="tempest-tests-tempest-tests-runner" Dec 01 10:32:31 crc kubenswrapper[4933]: I1201 10:32:31.483817 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="c272594d-4d61-490a-a44d-0a82106c9a1f" containerName="tempest-tests-tempest-tests-runner" Dec 01 10:32:31 crc kubenswrapper[4933]: E1201 10:32:31.483833 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3473399-9663-4e0c-a416-c13cb659ab94" containerName="collect-profiles" Dec 01 10:32:31 crc kubenswrapper[4933]: I1201 10:32:31.483844 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3473399-9663-4e0c-a416-c13cb659ab94" containerName="collect-profiles" Dec 01 10:32:31 crc kubenswrapper[4933]: I1201 10:32:31.484100 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="c272594d-4d61-490a-a44d-0a82106c9a1f" containerName="tempest-tests-tempest-tests-runner" Dec 01 10:32:31 crc kubenswrapper[4933]: I1201 10:32:31.484119 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3473399-9663-4e0c-a416-c13cb659ab94" containerName="collect-profiles" Dec 01 10:32:31 crc kubenswrapper[4933]: I1201 10:32:31.485030 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 10:32:31 crc kubenswrapper[4933]: I1201 10:32:31.491232 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-mlp48" Dec 01 10:32:31 crc kubenswrapper[4933]: I1201 10:32:31.505321 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 01 10:32:31 crc kubenswrapper[4933]: I1201 10:32:31.535183 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"265d92de-a6f0-45ea-9175-15a4bb7c1716\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 10:32:31 crc kubenswrapper[4933]: I1201 10:32:31.535281 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c4lm\" (UniqueName: \"kubernetes.io/projected/265d92de-a6f0-45ea-9175-15a4bb7c1716-kube-api-access-7c4lm\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"265d92de-a6f0-45ea-9175-15a4bb7c1716\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 10:32:31 crc kubenswrapper[4933]: I1201 10:32:31.637288 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"265d92de-a6f0-45ea-9175-15a4bb7c1716\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 10:32:31 crc kubenswrapper[4933]: I1201 10:32:31.637441 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c4lm\" (UniqueName: \"kubernetes.io/projected/265d92de-a6f0-45ea-9175-15a4bb7c1716-kube-api-access-7c4lm\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"265d92de-a6f0-45ea-9175-15a4bb7c1716\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 10:32:31 crc kubenswrapper[4933]: I1201 10:32:31.637898 4933 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"265d92de-a6f0-45ea-9175-15a4bb7c1716\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 10:32:31 crc kubenswrapper[4933]: I1201 10:32:31.666250 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c4lm\" (UniqueName: \"kubernetes.io/projected/265d92de-a6f0-45ea-9175-15a4bb7c1716-kube-api-access-7c4lm\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"265d92de-a6f0-45ea-9175-15a4bb7c1716\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 10:32:31 crc kubenswrapper[4933]: I1201 10:32:31.668329 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"265d92de-a6f0-45ea-9175-15a4bb7c1716\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 10:32:31 crc 
kubenswrapper[4933]: I1201 10:32:31.810946 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 10:32:32 crc kubenswrapper[4933]: I1201 10:32:32.345217 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 01 10:32:32 crc kubenswrapper[4933]: W1201 10:32:32.354736 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod265d92de_a6f0_45ea_9175_15a4bb7c1716.slice/crio-e2a560663fda35530ec6f2073d94343d0cbae8aac73e9cddd80d79b87dcc01c8 WatchSource:0}: Error finding container e2a560663fda35530ec6f2073d94343d0cbae8aac73e9cddd80d79b87dcc01c8: Status 404 returned error can't find the container with id e2a560663fda35530ec6f2073d94343d0cbae8aac73e9cddd80d79b87dcc01c8 Dec 01 10:32:33 crc kubenswrapper[4933]: I1201 10:32:33.070064 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"265d92de-a6f0-45ea-9175-15a4bb7c1716","Type":"ContainerStarted","Data":"e2a560663fda35530ec6f2073d94343d0cbae8aac73e9cddd80d79b87dcc01c8"} Dec 01 10:32:34 crc kubenswrapper[4933]: I1201 10:32:34.081395 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"265d92de-a6f0-45ea-9175-15a4bb7c1716","Type":"ContainerStarted","Data":"290e79057c233ffabcf4424c5e4a4e0e0a09df4abfa6318455b2d2e87446e2be"} Dec 01 10:32:34 crc kubenswrapper[4933]: I1201 10:32:34.108415 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.667633506 podStartE2EDuration="3.1083877s" podCreationTimestamp="2025-12-01 10:32:31 +0000 UTC" firstStartedPulling="2025-12-01 10:32:32.357701055 +0000 UTC m=+3642.999424670" lastFinishedPulling="2025-12-01 10:32:33.798455249 +0000 UTC m=+3644.440178864" observedRunningTime="2025-12-01 10:32:34.098188247 +0000 UTC m=+3644.739911862" watchObservedRunningTime="2025-12-01 10:32:34.1083877 +0000 UTC m=+3644.750111335" Dec 01 10:32:57 crc kubenswrapper[4933]: I1201 10:32:57.044115 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-srq4c/must-gather-8vkvd"] Dec 01 10:32:57 crc kubenswrapper[4933]: I1201 10:32:57.047025 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-srq4c/must-gather-8vkvd" Dec 01 10:32:57 crc kubenswrapper[4933]: I1201 10:32:57.048477 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-srq4c"/"default-dockercfg-9kj9n" Dec 01 10:32:57 crc kubenswrapper[4933]: I1201 10:32:57.048993 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-srq4c"/"openshift-service-ca.crt" Dec 01 10:32:57 crc kubenswrapper[4933]: I1201 10:32:57.049928 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-srq4c"/"kube-root-ca.crt" Dec 01 10:32:57 crc kubenswrapper[4933]: I1201 10:32:57.056096 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-srq4c/must-gather-8vkvd"] Dec 01 10:32:57 crc kubenswrapper[4933]: I1201 10:32:57.125574 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/66506bb4-0cd2-435d-a8a3-efd4baf4a548-must-gather-output\") pod \"must-gather-8vkvd\" (UID: \"66506bb4-0cd2-435d-a8a3-efd4baf4a548\") " pod="openshift-must-gather-srq4c/must-gather-8vkvd" Dec 01 10:32:57 crc kubenswrapper[4933]: I1201 10:32:57.125910 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmsfj\" (UniqueName: \"kubernetes.io/projected/66506bb4-0cd2-435d-a8a3-efd4baf4a548-kube-api-access-wmsfj\") pod \"must-gather-8vkvd\" (UID: \"66506bb4-0cd2-435d-a8a3-efd4baf4a548\") " pod="openshift-must-gather-srq4c/must-gather-8vkvd" Dec 01 10:32:57 crc kubenswrapper[4933]: I1201 10:32:57.228031 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/66506bb4-0cd2-435d-a8a3-efd4baf4a548-must-gather-output\") pod \"must-gather-8vkvd\" (UID: \"66506bb4-0cd2-435d-a8a3-efd4baf4a548\") " pod="openshift-must-gather-srq4c/must-gather-8vkvd" Dec 01 10:32:57 crc kubenswrapper[4933]: I1201 10:32:57.228461 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmsfj\" (UniqueName: \"kubernetes.io/projected/66506bb4-0cd2-435d-a8a3-efd4baf4a548-kube-api-access-wmsfj\") pod \"must-gather-8vkvd\" (UID: \"66506bb4-0cd2-435d-a8a3-efd4baf4a548\") " pod="openshift-must-gather-srq4c/must-gather-8vkvd" Dec 01 10:32:57 crc kubenswrapper[4933]: I1201 10:32:57.228814 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/66506bb4-0cd2-435d-a8a3-efd4baf4a548-must-gather-output\") pod \"must-gather-8vkvd\" (UID: \"66506bb4-0cd2-435d-a8a3-efd4baf4a548\") " pod="openshift-must-gather-srq4c/must-gather-8vkvd" Dec 01 10:32:57 crc kubenswrapper[4933]: I1201 10:32:57.248928 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmsfj\" (UniqueName: \"kubernetes.io/projected/66506bb4-0cd2-435d-a8a3-efd4baf4a548-kube-api-access-wmsfj\") pod \"must-gather-8vkvd\" (UID: \"66506bb4-0cd2-435d-a8a3-efd4baf4a548\") " pod="openshift-must-gather-srq4c/must-gather-8vkvd" Dec 01 10:32:57 crc kubenswrapper[4933]: I1201 10:32:57.368814 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-srq4c/must-gather-8vkvd" Dec 01 10:32:57 crc kubenswrapper[4933]: I1201 10:32:57.722600 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-srq4c/must-gather-8vkvd"] Dec 01 10:32:58 crc kubenswrapper[4933]: I1201 10:32:58.332272 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-srq4c/must-gather-8vkvd" event={"ID":"66506bb4-0cd2-435d-a8a3-efd4baf4a548","Type":"ContainerStarted","Data":"673a45d248195e00ca6f5e8f9ec55e2d9aab1e3fe89f6d8e02d866fb9a97a430"} Dec 01 10:33:02 crc kubenswrapper[4933]: I1201 10:33:02.381802 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-srq4c/must-gather-8vkvd" event={"ID":"66506bb4-0cd2-435d-a8a3-efd4baf4a548","Type":"ContainerStarted","Data":"8c2db9e9dd3f021dfed91ae25e13bb67cd61940a46ade445478d4db42cb9a568"} Dec 01 10:33:03 crc kubenswrapper[4933]: I1201 10:33:03.392543 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-srq4c/must-gather-8vkvd" event={"ID":"66506bb4-0cd2-435d-a8a3-efd4baf4a548","Type":"ContainerStarted","Data":"fd511bffa4845f96bd89b80049a8d84f9d820caa42aab1b9abf25f040d43e8f4"} Dec 01 10:33:03 crc kubenswrapper[4933]: I1201 10:33:03.412362 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-srq4c/must-gather-8vkvd" podStartSLOduration=2.221017449 podStartE2EDuration="6.412340469s" podCreationTimestamp="2025-12-01 10:32:57 +0000 UTC" firstStartedPulling="2025-12-01 10:32:57.713922467 +0000 UTC m=+3668.355646082" lastFinishedPulling="2025-12-01 10:33:01.905245477 +0000 UTC m=+3672.546969102" observedRunningTime="2025-12-01 10:33:03.409906809 +0000 UTC m=+3674.051630434" watchObservedRunningTime="2025-12-01 10:33:03.412340469 +0000 UTC m=+3674.054064094" Dec 01 10:33:06 crc kubenswrapper[4933]: I1201 10:33:06.230260 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-srq4c/crc-debug-smmp7"] Dec 01 10:33:06 crc kubenswrapper[4933]: I1201 10:33:06.233052 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-srq4c/crc-debug-smmp7" Dec 01 10:33:06 crc kubenswrapper[4933]: I1201 10:33:06.273385 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aae55ce5-ef05-4f3a-b7f8-719b84b49bc9-host\") pod \"crc-debug-smmp7\" (UID: \"aae55ce5-ef05-4f3a-b7f8-719b84b49bc9\") " pod="openshift-must-gather-srq4c/crc-debug-smmp7" Dec 01 10:33:06 crc kubenswrapper[4933]: I1201 10:33:06.273816 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwnwn\" (UniqueName: \"kubernetes.io/projected/aae55ce5-ef05-4f3a-b7f8-719b84b49bc9-kube-api-access-mwnwn\") pod \"crc-debug-smmp7\" (UID: \"aae55ce5-ef05-4f3a-b7f8-719b84b49bc9\") " pod="openshift-must-gather-srq4c/crc-debug-smmp7" Dec 01 10:33:06 crc kubenswrapper[4933]: I1201 10:33:06.375953 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aae55ce5-ef05-4f3a-b7f8-719b84b49bc9-host\") pod \"crc-debug-smmp7\" (UID: \"aae55ce5-ef05-4f3a-b7f8-719b84b49bc9\") " pod="openshift-must-gather-srq4c/crc-debug-smmp7" Dec 01 10:33:06 crc kubenswrapper[4933]: I1201 10:33:06.376138 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwnwn\" (UniqueName: \"kubernetes.io/projected/aae55ce5-ef05-4f3a-b7f8-719b84b49bc9-kube-api-access-mwnwn\") pod \"crc-debug-smmp7\" (UID: \"aae55ce5-ef05-4f3a-b7f8-719b84b49bc9\") " pod="openshift-must-gather-srq4c/crc-debug-smmp7" Dec 01 10:33:06 crc kubenswrapper[4933]: I1201 10:33:06.376153 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aae55ce5-ef05-4f3a-b7f8-719b84b49bc9-host\") pod \"crc-debug-smmp7\" (UID: \"aae55ce5-ef05-4f3a-b7f8-719b84b49bc9\") " pod="openshift-must-gather-srq4c/crc-debug-smmp7" Dec 01 10:33:06 crc kubenswrapper[4933]: I1201 10:33:06.400254 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwnwn\" (UniqueName: \"kubernetes.io/projected/aae55ce5-ef05-4f3a-b7f8-719b84b49bc9-kube-api-access-mwnwn\") pod \"crc-debug-smmp7\" (UID: \"aae55ce5-ef05-4f3a-b7f8-719b84b49bc9\") " pod="openshift-must-gather-srq4c/crc-debug-smmp7" Dec 01 10:33:06 crc kubenswrapper[4933]: I1201 10:33:06.554261 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-srq4c/crc-debug-smmp7" Dec 01 10:33:07 crc kubenswrapper[4933]: I1201 10:33:07.448457 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-srq4c/crc-debug-smmp7" event={"ID":"aae55ce5-ef05-4f3a-b7f8-719b84b49bc9","Type":"ContainerStarted","Data":"c7557a586cac42122dd7d8c47c8690d61af3bdc29c53210d09f7bd2035acefcd"} Dec 01 10:33:19 crc kubenswrapper[4933]: I1201 10:33:19.588361 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-srq4c/crc-debug-smmp7" event={"ID":"aae55ce5-ef05-4f3a-b7f8-719b84b49bc9","Type":"ContainerStarted","Data":"5f74f93a4885eea6a73b28cc6771162156ce1803227cfc53cc657e183bc87b1e"} Dec 01 10:33:19 crc kubenswrapper[4933]: I1201 10:33:19.603431 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-srq4c/crc-debug-smmp7" podStartSLOduration=1.7217263539999998 podStartE2EDuration="13.603406515s" podCreationTimestamp="2025-12-01 10:33:06 +0000 UTC" firstStartedPulling="2025-12-01 10:33:06.605706726 +0000 UTC m=+3677.247430351" lastFinishedPulling="2025-12-01 10:33:18.487386897 +0000 UTC m=+3689.129110512" observedRunningTime="2025-12-01 10:33:19.600838092 +0000 UTC m=+3690.242561707" watchObservedRunningTime="2025-12-01 10:33:19.603406515 +0000 UTC m=+3690.245130130" Dec 01 10:33:51 crc kubenswrapper[4933]: I1201 10:33:51.907236 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ww2kn"] Dec 01 10:33:51 crc kubenswrapper[4933]: I1201 10:33:51.911192 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ww2kn" Dec 01 10:33:51 crc kubenswrapper[4933]: I1201 10:33:51.927008 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ww2kn"] Dec 01 10:33:51 crc kubenswrapper[4933]: I1201 10:33:51.999125 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdwm6\" (UniqueName: \"kubernetes.io/projected/0a776435-8504-4dfb-8394-3dc3cbfb87a3-kube-api-access-wdwm6\") pod \"redhat-operators-ww2kn\" (UID: \"0a776435-8504-4dfb-8394-3dc3cbfb87a3\") " pod="openshift-marketplace/redhat-operators-ww2kn" Dec 01 10:33:51 crc kubenswrapper[4933]: I1201 10:33:51.999180 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a776435-8504-4dfb-8394-3dc3cbfb87a3-catalog-content\") pod \"redhat-operators-ww2kn\" (UID: \"0a776435-8504-4dfb-8394-3dc3cbfb87a3\") " pod="openshift-marketplace/redhat-operators-ww2kn" Dec 01 10:33:51 crc kubenswrapper[4933]: I1201 10:33:51.999204 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a776435-8504-4dfb-8394-3dc3cbfb87a3-utilities\") pod \"redhat-operators-ww2kn\" (UID: \"0a776435-8504-4dfb-8394-3dc3cbfb87a3\") " pod="openshift-marketplace/redhat-operators-ww2kn" Dec 01 10:33:52 crc kubenswrapper[4933]: I1201 10:33:52.101662 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdwm6\" (UniqueName: \"kubernetes.io/projected/0a776435-8504-4dfb-8394-3dc3cbfb87a3-kube-api-access-wdwm6\") pod \"redhat-operators-ww2kn\" (UID: \"0a776435-8504-4dfb-8394-3dc3cbfb87a3\") " pod="openshift-marketplace/redhat-operators-ww2kn" Dec 01 10:33:52 crc kubenswrapper[4933]: 
I1201 10:33:52.101727 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a776435-8504-4dfb-8394-3dc3cbfb87a3-catalog-content\") pod \"redhat-operators-ww2kn\" (UID: \"0a776435-8504-4dfb-8394-3dc3cbfb87a3\") " pod="openshift-marketplace/redhat-operators-ww2kn" Dec 01 10:33:52 crc kubenswrapper[4933]: I1201 10:33:52.101758 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a776435-8504-4dfb-8394-3dc3cbfb87a3-utilities\") pod \"redhat-operators-ww2kn\" (UID: \"0a776435-8504-4dfb-8394-3dc3cbfb87a3\") " pod="openshift-marketplace/redhat-operators-ww2kn" Dec 01 10:33:52 crc kubenswrapper[4933]: I1201 10:33:52.102428 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a776435-8504-4dfb-8394-3dc3cbfb87a3-catalog-content\") pod \"redhat-operators-ww2kn\" (UID: \"0a776435-8504-4dfb-8394-3dc3cbfb87a3\") " pod="openshift-marketplace/redhat-operators-ww2kn" Dec 01 10:33:52 crc kubenswrapper[4933]: I1201 10:33:52.102455 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a776435-8504-4dfb-8394-3dc3cbfb87a3-utilities\") pod \"redhat-operators-ww2kn\" (UID: \"0a776435-8504-4dfb-8394-3dc3cbfb87a3\") " pod="openshift-marketplace/redhat-operators-ww2kn" Dec 01 10:33:52 crc kubenswrapper[4933]: I1201 10:33:52.132712 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdwm6\" (UniqueName: \"kubernetes.io/projected/0a776435-8504-4dfb-8394-3dc3cbfb87a3-kube-api-access-wdwm6\") pod \"redhat-operators-ww2kn\" (UID: \"0a776435-8504-4dfb-8394-3dc3cbfb87a3\") " pod="openshift-marketplace/redhat-operators-ww2kn" Dec 01 10:33:52 crc kubenswrapper[4933]: I1201 10:33:52.232828 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ww2kn" Dec 01 10:33:52 crc kubenswrapper[4933]: I1201 10:33:52.763912 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ww2kn"] Dec 01 10:33:52 crc kubenswrapper[4933]: I1201 10:33:52.929996 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ww2kn" event={"ID":"0a776435-8504-4dfb-8394-3dc3cbfb87a3","Type":"ContainerStarted","Data":"e562abba29d21fafa99a55672621949ac3f5df11204d7a6e56c59c745b7f9949"} Dec 01 10:33:53 crc kubenswrapper[4933]: I1201 10:33:53.943438 4933 generic.go:334] "Generic (PLEG): container finished" podID="0a776435-8504-4dfb-8394-3dc3cbfb87a3" containerID="c5f470d65c3b5af56ddd67c594937d8dd1414bb3ee031b7dfb06fc3b5487a9bb" exitCode=0 Dec 01 10:33:53 crc kubenswrapper[4933]: I1201 10:33:53.943565 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ww2kn" event={"ID":"0a776435-8504-4dfb-8394-3dc3cbfb87a3","Type":"ContainerDied","Data":"c5f470d65c3b5af56ddd67c594937d8dd1414bb3ee031b7dfb06fc3b5487a9bb"} Dec 01 10:33:55 crc kubenswrapper[4933]: I1201 10:33:55.967260 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ww2kn" event={"ID":"0a776435-8504-4dfb-8394-3dc3cbfb87a3","Type":"ContainerStarted","Data":"cfca601e6a17ad70026bf87b3dd1fc8b688437229efd58ba72d8a3419aeef831"} Dec 01 10:33:56 crc kubenswrapper[4933]: I1201 10:33:56.980844 4933 generic.go:334] "Generic (PLEG): container finished" podID="0a776435-8504-4dfb-8394-3dc3cbfb87a3" containerID="cfca601e6a17ad70026bf87b3dd1fc8b688437229efd58ba72d8a3419aeef831" exitCode=0 Dec 01 10:33:56 crc kubenswrapper[4933]: I1201 10:33:56.980911 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ww2kn" event={"ID":"0a776435-8504-4dfb-8394-3dc3cbfb87a3","Type":"ContainerDied","Data":"cfca601e6a17ad70026bf87b3dd1fc8b688437229efd58ba72d8a3419aeef831"} Dec 01 10:33:59 crc kubenswrapper[4933]: I1201 10:33:59.005688 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ww2kn" event={"ID":"0a776435-8504-4dfb-8394-3dc3cbfb87a3","Type":"ContainerStarted","Data":"0cafce2fe1436ceed14d86340f1b16b69d815ae12352b3478e96f55b981789bd"} Dec 01 10:33:59 crc kubenswrapper[4933]: I1201 10:33:59.029041 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ww2kn" podStartSLOduration=3.81171694 podStartE2EDuration="8.029014621s" podCreationTimestamp="2025-12-01 10:33:51 +0000 UTC" firstStartedPulling="2025-12-01 10:33:53.945560579 +0000 UTC m=+3724.587284194" lastFinishedPulling="2025-12-01 10:33:58.16285826 +0000 UTC m=+3728.804581875" observedRunningTime="2025-12-01 10:33:59.023126976 +0000 UTC m=+3729.664850591" watchObservedRunningTime="2025-12-01 10:33:59.029014621 +0000 UTC m=+3729.670738236" Dec 01 10:34:01 crc kubenswrapper[4933]: I1201 10:34:01.030170 4933 generic.go:334] "Generic (PLEG): container finished" podID="aae55ce5-ef05-4f3a-b7f8-719b84b49bc9" containerID="5f74f93a4885eea6a73b28cc6771162156ce1803227cfc53cc657e183bc87b1e" exitCode=0 Dec 01 10:34:01 crc kubenswrapper[4933]: I1201 10:34:01.030260 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-srq4c/crc-debug-smmp7" 
event={"ID":"aae55ce5-ef05-4f3a-b7f8-719b84b49bc9","Type":"ContainerDied","Data":"5f74f93a4885eea6a73b28cc6771162156ce1803227cfc53cc657e183bc87b1e"} Dec 01 10:34:02 crc kubenswrapper[4933]: I1201 10:34:02.137104 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-srq4c/crc-debug-smmp7" Dec 01 10:34:02 crc kubenswrapper[4933]: I1201 10:34:02.191586 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-srq4c/crc-debug-smmp7"] Dec 01 10:34:02 crc kubenswrapper[4933]: I1201 10:34:02.203037 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-srq4c/crc-debug-smmp7"] Dec 01 10:34:02 crc kubenswrapper[4933]: I1201 10:34:02.233747 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ww2kn" Dec 01 10:34:02 crc kubenswrapper[4933]: I1201 10:34:02.233841 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ww2kn" Dec 01 10:34:02 crc kubenswrapper[4933]: I1201 10:34:02.239199 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aae55ce5-ef05-4f3a-b7f8-719b84b49bc9-host\") pod \"aae55ce5-ef05-4f3a-b7f8-719b84b49bc9\" (UID: \"aae55ce5-ef05-4f3a-b7f8-719b84b49bc9\") " Dec 01 10:34:02 crc kubenswrapper[4933]: I1201 10:34:02.239357 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae55ce5-ef05-4f3a-b7f8-719b84b49bc9-host" (OuterVolumeSpecName: "host") pod "aae55ce5-ef05-4f3a-b7f8-719b84b49bc9" (UID: "aae55ce5-ef05-4f3a-b7f8-719b84b49bc9"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 10:34:02 crc kubenswrapper[4933]: I1201 10:34:02.239518 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwnwn\" (UniqueName: \"kubernetes.io/projected/aae55ce5-ef05-4f3a-b7f8-719b84b49bc9-kube-api-access-mwnwn\") pod \"aae55ce5-ef05-4f3a-b7f8-719b84b49bc9\" (UID: \"aae55ce5-ef05-4f3a-b7f8-719b84b49bc9\") " Dec 01 10:34:02 crc kubenswrapper[4933]: I1201 10:34:02.240209 4933 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aae55ce5-ef05-4f3a-b7f8-719b84b49bc9-host\") on node \"crc\" DevicePath \"\"" Dec 01 10:34:02 crc kubenswrapper[4933]: I1201 10:34:02.245868 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aae55ce5-ef05-4f3a-b7f8-719b84b49bc9-kube-api-access-mwnwn" (OuterVolumeSpecName: "kube-api-access-mwnwn") pod "aae55ce5-ef05-4f3a-b7f8-719b84b49bc9" (UID: "aae55ce5-ef05-4f3a-b7f8-719b84b49bc9"). InnerVolumeSpecName "kube-api-access-mwnwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:34:02 crc kubenswrapper[4933]: I1201 10:34:02.343228 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwnwn\" (UniqueName: \"kubernetes.io/projected/aae55ce5-ef05-4f3a-b7f8-719b84b49bc9-kube-api-access-mwnwn\") on node \"crc\" DevicePath \"\"" Dec 01 10:34:03 crc kubenswrapper[4933]: I1201 10:34:03.049248 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7557a586cac42122dd7d8c47c8690d61af3bdc29c53210d09f7bd2035acefcd" Dec 01 10:34:03 crc kubenswrapper[4933]: I1201 10:34:03.049331 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-srq4c/crc-debug-smmp7" Dec 01 10:34:03 crc kubenswrapper[4933]: I1201 10:34:03.286559 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ww2kn" podUID="0a776435-8504-4dfb-8394-3dc3cbfb87a3" containerName="registry-server" probeResult="failure" output=< Dec 01 10:34:03 crc kubenswrapper[4933]: timeout: failed to connect service ":50051" within 1s Dec 01 10:34:03 crc kubenswrapper[4933]: > Dec 01 10:34:03 crc kubenswrapper[4933]: I1201 10:34:03.396388 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-srq4c/crc-debug-d6kvc"] Dec 01 10:34:03 crc kubenswrapper[4933]: E1201 10:34:03.396859 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae55ce5-ef05-4f3a-b7f8-719b84b49bc9" containerName="container-00" Dec 01 10:34:03 crc kubenswrapper[4933]: I1201 10:34:03.396873 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae55ce5-ef05-4f3a-b7f8-719b84b49bc9" containerName="container-00" Dec 01 10:34:03 crc kubenswrapper[4933]: I1201 10:34:03.397062 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="aae55ce5-ef05-4f3a-b7f8-719b84b49bc9" containerName="container-00" Dec 01 10:34:03 crc kubenswrapper[4933]: I1201 10:34:03.397779 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-srq4c/crc-debug-d6kvc" Dec 01 10:34:03 crc kubenswrapper[4933]: I1201 10:34:03.464860 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f40b8b9-2460-4a44-8124-165afcb1ab5b-host\") pod \"crc-debug-d6kvc\" (UID: \"1f40b8b9-2460-4a44-8124-165afcb1ab5b\") " pod="openshift-must-gather-srq4c/crc-debug-d6kvc" Dec 01 10:34:03 crc kubenswrapper[4933]: I1201 10:34:03.464971 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p44dm\" (UniqueName: \"kubernetes.io/projected/1f40b8b9-2460-4a44-8124-165afcb1ab5b-kube-api-access-p44dm\") pod \"crc-debug-d6kvc\" (UID: \"1f40b8b9-2460-4a44-8124-165afcb1ab5b\") " pod="openshift-must-gather-srq4c/crc-debug-d6kvc" Dec 01 10:34:03 crc kubenswrapper[4933]: I1201 10:34:03.566262 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p44dm\" (UniqueName: \"kubernetes.io/projected/1f40b8b9-2460-4a44-8124-165afcb1ab5b-kube-api-access-p44dm\") pod \"crc-debug-d6kvc\" (UID: \"1f40b8b9-2460-4a44-8124-165afcb1ab5b\") " pod="openshift-must-gather-srq4c/crc-debug-d6kvc" Dec 01 10:34:03 crc kubenswrapper[4933]: I1201 10:34:03.566516 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f40b8b9-2460-4a44-8124-165afcb1ab5b-host\") pod \"crc-debug-d6kvc\" (UID: \"1f40b8b9-2460-4a44-8124-165afcb1ab5b\") " pod="openshift-must-gather-srq4c/crc-debug-d6kvc" Dec 01 10:34:03 crc kubenswrapper[4933]: I1201 10:34:03.566693 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f40b8b9-2460-4a44-8124-165afcb1ab5b-host\") pod \"crc-debug-d6kvc\" (UID: \"1f40b8b9-2460-4a44-8124-165afcb1ab5b\") " pod="openshift-must-gather-srq4c/crc-debug-d6kvc" Dec 01 10:34:03 crc kubenswrapper[4933]: I1201 10:34:03.585897 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p44dm\" (UniqueName: 
\"kubernetes.io/projected/1f40b8b9-2460-4a44-8124-165afcb1ab5b-kube-api-access-p44dm\") pod \"crc-debug-d6kvc\" (UID: \"1f40b8b9-2460-4a44-8124-165afcb1ab5b\") " pod="openshift-must-gather-srq4c/crc-debug-d6kvc" Dec 01 10:34:03 crc kubenswrapper[4933]: I1201 10:34:03.682776 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aae55ce5-ef05-4f3a-b7f8-719b84b49bc9" path="/var/lib/kubelet/pods/aae55ce5-ef05-4f3a-b7f8-719b84b49bc9/volumes" Dec 01 10:34:03 crc kubenswrapper[4933]: I1201 10:34:03.721463 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-srq4c/crc-debug-d6kvc" Dec 01 10:34:04 crc kubenswrapper[4933]: I1201 10:34:04.061609 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-srq4c/crc-debug-d6kvc" event={"ID":"1f40b8b9-2460-4a44-8124-165afcb1ab5b","Type":"ContainerStarted","Data":"02316fa9fe2d4ce07f51746f80c5a6ccfb7f0f9c031aaf826f8e72385dde5e39"} Dec 01 10:34:04 crc kubenswrapper[4933]: I1201 10:34:04.062234 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-srq4c/crc-debug-d6kvc" event={"ID":"1f40b8b9-2460-4a44-8124-165afcb1ab5b","Type":"ContainerStarted","Data":"fc86f39e7f107ce29c15969be31e29664f2e57348320caeb11636f7b6170c0de"} Dec 01 10:34:04 crc kubenswrapper[4933]: I1201 10:34:04.085406 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-srq4c/crc-debug-d6kvc" podStartSLOduration=1.085375163 podStartE2EDuration="1.085375163s" podCreationTimestamp="2025-12-01 10:34:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 10:34:04.080746169 +0000 UTC m=+3734.722469804" watchObservedRunningTime="2025-12-01 10:34:04.085375163 +0000 UTC m=+3734.727098778" Dec 01 10:34:05 crc kubenswrapper[4933]: I1201 10:34:05.076154 4933 generic.go:334] "Generic (PLEG): container finished" podID="1f40b8b9-2460-4a44-8124-165afcb1ab5b" containerID="02316fa9fe2d4ce07f51746f80c5a6ccfb7f0f9c031aaf826f8e72385dde5e39" exitCode=0 Dec 01 10:34:05 crc kubenswrapper[4933]: I1201 10:34:05.076266 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-srq4c/crc-debug-d6kvc" event={"ID":"1f40b8b9-2460-4a44-8124-165afcb1ab5b","Type":"ContainerDied","Data":"02316fa9fe2d4ce07f51746f80c5a6ccfb7f0f9c031aaf826f8e72385dde5e39"} Dec 01 10:34:06 crc kubenswrapper[4933]: I1201 10:34:06.256424 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-srq4c/crc-debug-d6kvc" Dec 01 10:34:06 crc kubenswrapper[4933]: I1201 10:34:06.295957 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-srq4c/crc-debug-d6kvc"] Dec 01 10:34:06 crc kubenswrapper[4933]: I1201 10:34:06.309394 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-srq4c/crc-debug-d6kvc"] Dec 01 10:34:06 crc kubenswrapper[4933]: I1201 10:34:06.333635 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p44dm\" (UniqueName: \"kubernetes.io/projected/1f40b8b9-2460-4a44-8124-165afcb1ab5b-kube-api-access-p44dm\") pod \"1f40b8b9-2460-4a44-8124-165afcb1ab5b\" (UID: \"1f40b8b9-2460-4a44-8124-165afcb1ab5b\") " Dec 01 10:34:06 crc kubenswrapper[4933]: I1201 10:34:06.333944 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f40b8b9-2460-4a44-8124-165afcb1ab5b-host\") pod \"1f40b8b9-2460-4a44-8124-165afcb1ab5b\" (UID: \"1f40b8b9-2460-4a44-8124-165afcb1ab5b\") " Dec 01 10:34:06 crc kubenswrapper[4933]: I1201 10:34:06.334156 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f40b8b9-2460-4a44-8124-165afcb1ab5b-host" (OuterVolumeSpecName: "host") pod "1f40b8b9-2460-4a44-8124-165afcb1ab5b" (UID: "1f40b8b9-2460-4a44-8124-165afcb1ab5b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 10:34:06 crc kubenswrapper[4933]: I1201 10:34:06.334492 4933 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f40b8b9-2460-4a44-8124-165afcb1ab5b-host\") on node \"crc\" DevicePath \"\"" Dec 01 10:34:06 crc kubenswrapper[4933]: I1201 10:34:06.342855 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f40b8b9-2460-4a44-8124-165afcb1ab5b-kube-api-access-p44dm" (OuterVolumeSpecName: "kube-api-access-p44dm") pod "1f40b8b9-2460-4a44-8124-165afcb1ab5b" (UID: "1f40b8b9-2460-4a44-8124-165afcb1ab5b"). InnerVolumeSpecName "kube-api-access-p44dm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:34:06 crc kubenswrapper[4933]: I1201 10:34:06.437019 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p44dm\" (UniqueName: \"kubernetes.io/projected/1f40b8b9-2460-4a44-8124-165afcb1ab5b-kube-api-access-p44dm\") on node \"crc\" DevicePath \"\"" Dec 01 10:34:07 crc kubenswrapper[4933]: I1201 10:34:07.105360 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc86f39e7f107ce29c15969be31e29664f2e57348320caeb11636f7b6170c0de" Dec 01 10:34:07 crc kubenswrapper[4933]: I1201 10:34:07.105550 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-srq4c/crc-debug-d6kvc" Dec 01 10:34:07 crc kubenswrapper[4933]: I1201 10:34:07.485556 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-srq4c/crc-debug-l97hh"] Dec 01 10:34:07 crc kubenswrapper[4933]: E1201 10:34:07.486108 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f40b8b9-2460-4a44-8124-165afcb1ab5b" containerName="container-00" Dec 01 10:34:07 crc kubenswrapper[4933]: I1201 10:34:07.486123 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f40b8b9-2460-4a44-8124-165afcb1ab5b" containerName="container-00" Dec 01 10:34:07 crc kubenswrapper[4933]: I1201 10:34:07.486398 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f40b8b9-2460-4a44-8124-165afcb1ab5b" containerName="container-00" Dec 01 10:34:07 crc kubenswrapper[4933]: I1201 10:34:07.487177 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-srq4c/crc-debug-l97hh" Dec 01 10:34:07 crc kubenswrapper[4933]: I1201 10:34:07.567164 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f270b28-3b99-4ed7-95d6-051c4ada5ec6-host\") pod \"crc-debug-l97hh\" (UID: \"4f270b28-3b99-4ed7-95d6-051c4ada5ec6\") " pod="openshift-must-gather-srq4c/crc-debug-l97hh" Dec 01 10:34:07 crc kubenswrapper[4933]: I1201 10:34:07.567679 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqrlg\" (UniqueName: \"kubernetes.io/projected/4f270b28-3b99-4ed7-95d6-051c4ada5ec6-kube-api-access-rqrlg\") pod \"crc-debug-l97hh\" (UID: \"4f270b28-3b99-4ed7-95d6-051c4ada5ec6\") " pod="openshift-must-gather-srq4c/crc-debug-l97hh" Dec 01 10:34:07 crc kubenswrapper[4933]: I1201 10:34:07.672711 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqrlg\" (UniqueName: \"kubernetes.io/projected/4f270b28-3b99-4ed7-95d6-051c4ada5ec6-kube-api-access-rqrlg\") pod \"crc-debug-l97hh\" (UID: \"4f270b28-3b99-4ed7-95d6-051c4ada5ec6\") " pod="openshift-must-gather-srq4c/crc-debug-l97hh" Dec 01 10:34:07 crc kubenswrapper[4933]: I1201 10:34:07.673175 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f270b28-3b99-4ed7-95d6-051c4ada5ec6-host\") pod \"crc-debug-l97hh\" (UID: \"4f270b28-3b99-4ed7-95d6-051c4ada5ec6\") " pod="openshift-must-gather-srq4c/crc-debug-l97hh" Dec 01 10:34:07 crc kubenswrapper[4933]: I1201 10:34:07.673330 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f270b28-3b99-4ed7-95d6-051c4ada5ec6-host\") pod \"crc-debug-l97hh\" (UID: \"4f270b28-3b99-4ed7-95d6-051c4ada5ec6\") " pod="openshift-must-gather-srq4c/crc-debug-l97hh" Dec 01 10:34:07 crc kubenswrapper[4933]: I1201 10:34:07.684209 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f40b8b9-2460-4a44-8124-165afcb1ab5b" path="/var/lib/kubelet/pods/1f40b8b9-2460-4a44-8124-165afcb1ab5b/volumes" Dec 01 10:34:07 crc kubenswrapper[4933]: I1201 10:34:07.694289 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqrlg\" (UniqueName: \"kubernetes.io/projected/4f270b28-3b99-4ed7-95d6-051c4ada5ec6-kube-api-access-rqrlg\") pod \"crc-debug-l97hh\" (UID: \"4f270b28-3b99-4ed7-95d6-051c4ada5ec6\") " 
pod="openshift-must-gather-srq4c/crc-debug-l97hh" Dec 01 10:34:07 crc kubenswrapper[4933]: I1201 10:34:07.806905 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-srq4c/crc-debug-l97hh" Dec 01 10:34:07 crc kubenswrapper[4933]: W1201 10:34:07.838907 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f270b28_3b99_4ed7_95d6_051c4ada5ec6.slice/crio-f419ea47fa60295d3b8993f642d01e19a2a35f92679f62f3b7009befd171436e WatchSource:0}: Error finding container f419ea47fa60295d3b8993f642d01e19a2a35f92679f62f3b7009befd171436e: Status 404 returned error can't find the container with id f419ea47fa60295d3b8993f642d01e19a2a35f92679f62f3b7009befd171436e Dec 01 10:34:08 crc kubenswrapper[4933]: I1201 10:34:08.119094 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-srq4c/crc-debug-l97hh" event={"ID":"4f270b28-3b99-4ed7-95d6-051c4ada5ec6","Type":"ContainerStarted","Data":"7185d417b81c318af8e86605c0abfd677212df76a8c1e9a219b5ae2f981182de"} Dec 01 10:34:08 crc kubenswrapper[4933]: I1201 10:34:08.119652 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-srq4c/crc-debug-l97hh" event={"ID":"4f270b28-3b99-4ed7-95d6-051c4ada5ec6","Type":"ContainerStarted","Data":"f419ea47fa60295d3b8993f642d01e19a2a35f92679f62f3b7009befd171436e"} Dec 01 10:34:08 crc kubenswrapper[4933]: I1201 10:34:08.183224 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-srq4c/crc-debug-l97hh"] Dec 01 10:34:08 crc kubenswrapper[4933]: I1201 10:34:08.197229 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-srq4c/crc-debug-l97hh"] Dec 01 10:34:09 crc kubenswrapper[4933]: I1201 10:34:09.132208 4933 generic.go:334] "Generic (PLEG): container finished" podID="4f270b28-3b99-4ed7-95d6-051c4ada5ec6" containerID="7185d417b81c318af8e86605c0abfd677212df76a8c1e9a219b5ae2f981182de" exitCode=0 Dec 01 10:34:09 crc kubenswrapper[4933]: I1201 10:34:09.232272 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-srq4c/crc-debug-l97hh" Dec 01 10:34:09 crc kubenswrapper[4933]: I1201 10:34:09.309396 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqrlg\" (UniqueName: \"kubernetes.io/projected/4f270b28-3b99-4ed7-95d6-051c4ada5ec6-kube-api-access-rqrlg\") pod \"4f270b28-3b99-4ed7-95d6-051c4ada5ec6\" (UID: \"4f270b28-3b99-4ed7-95d6-051c4ada5ec6\") " Dec 01 10:34:09 crc kubenswrapper[4933]: I1201 10:34:09.309707 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f270b28-3b99-4ed7-95d6-051c4ada5ec6-host\") pod \"4f270b28-3b99-4ed7-95d6-051c4ada5ec6\" (UID: \"4f270b28-3b99-4ed7-95d6-051c4ada5ec6\") " Dec 01 10:34:09 crc kubenswrapper[4933]: I1201 10:34:09.310156 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f270b28-3b99-4ed7-95d6-051c4ada5ec6-host" (OuterVolumeSpecName: "host") pod "4f270b28-3b99-4ed7-95d6-051c4ada5ec6" (UID: "4f270b28-3b99-4ed7-95d6-051c4ada5ec6"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 10:34:09 crc kubenswrapper[4933]: I1201 10:34:09.310596 4933 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f270b28-3b99-4ed7-95d6-051c4ada5ec6-host\") on node \"crc\" DevicePath \"\"" Dec 01 10:34:09 crc kubenswrapper[4933]: I1201 10:34:09.324868 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f270b28-3b99-4ed7-95d6-051c4ada5ec6-kube-api-access-rqrlg" (OuterVolumeSpecName: "kube-api-access-rqrlg") pod "4f270b28-3b99-4ed7-95d6-051c4ada5ec6" (UID: "4f270b28-3b99-4ed7-95d6-051c4ada5ec6"). InnerVolumeSpecName "kube-api-access-rqrlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:34:09 crc kubenswrapper[4933]: I1201 10:34:09.413382 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqrlg\" (UniqueName: \"kubernetes.io/projected/4f270b28-3b99-4ed7-95d6-051c4ada5ec6-kube-api-access-rqrlg\") on node \"crc\" DevicePath \"\"" Dec 01 10:34:09 crc kubenswrapper[4933]: I1201 10:34:09.681808 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f270b28-3b99-4ed7-95d6-051c4ada5ec6" path="/var/lib/kubelet/pods/4f270b28-3b99-4ed7-95d6-051c4ada5ec6/volumes" Dec 01 10:34:10 crc kubenswrapper[4933]: I1201 10:34:10.144704 4933 scope.go:117] "RemoveContainer" containerID="7185d417b81c318af8e86605c0abfd677212df76a8c1e9a219b5ae2f981182de" Dec 01 10:34:10 crc kubenswrapper[4933]: I1201 10:34:10.144789 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-srq4c/crc-debug-l97hh" Dec 01 10:34:12 crc kubenswrapper[4933]: I1201 10:34:12.300343 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ww2kn" Dec 01 10:34:12 crc kubenswrapper[4933]: I1201 10:34:12.368831 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ww2kn" Dec 01 10:34:12 crc kubenswrapper[4933]: I1201 10:34:12.549905 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ww2kn"] Dec 01 10:34:14 crc kubenswrapper[4933]: I1201 10:34:14.183372 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ww2kn" podUID="0a776435-8504-4dfb-8394-3dc3cbfb87a3" containerName="registry-server" containerID="cri-o://0cafce2fe1436ceed14d86340f1b16b69d815ae12352b3478e96f55b981789bd" gracePeriod=2 Dec 01 10:34:14 crc kubenswrapper[4933]: I1201 10:34:14.690396 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ww2kn" Dec 01 10:34:14 crc kubenswrapper[4933]: I1201 10:34:14.846113 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdwm6\" (UniqueName: \"kubernetes.io/projected/0a776435-8504-4dfb-8394-3dc3cbfb87a3-kube-api-access-wdwm6\") pod \"0a776435-8504-4dfb-8394-3dc3cbfb87a3\" (UID: \"0a776435-8504-4dfb-8394-3dc3cbfb87a3\") " Dec 01 10:34:14 crc kubenswrapper[4933]: I1201 10:34:14.847118 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a776435-8504-4dfb-8394-3dc3cbfb87a3-catalog-content\") pod \"0a776435-8504-4dfb-8394-3dc3cbfb87a3\" (UID: \"0a776435-8504-4dfb-8394-3dc3cbfb87a3\") " Dec 01 10:34:14 crc kubenswrapper[4933]: I1201 10:34:14.853413 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a776435-8504-4dfb-8394-3dc3cbfb87a3-utilities\") pod \"0a776435-8504-4dfb-8394-3dc3cbfb87a3\" (UID: \"0a776435-8504-4dfb-8394-3dc3cbfb87a3\") " Dec 01 10:34:14 crc kubenswrapper[4933]: I1201 10:34:14.853903 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a776435-8504-4dfb-8394-3dc3cbfb87a3-kube-api-access-wdwm6" (OuterVolumeSpecName: "kube-api-access-wdwm6") pod "0a776435-8504-4dfb-8394-3dc3cbfb87a3" (UID: "0a776435-8504-4dfb-8394-3dc3cbfb87a3"). InnerVolumeSpecName "kube-api-access-wdwm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:34:14 crc kubenswrapper[4933]: I1201 10:34:14.854086 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a776435-8504-4dfb-8394-3dc3cbfb87a3-utilities" (OuterVolumeSpecName: "utilities") pod "0a776435-8504-4dfb-8394-3dc3cbfb87a3" (UID: "0a776435-8504-4dfb-8394-3dc3cbfb87a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:34:14 crc kubenswrapper[4933]: I1201 10:34:14.855764 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdwm6\" (UniqueName: \"kubernetes.io/projected/0a776435-8504-4dfb-8394-3dc3cbfb87a3-kube-api-access-wdwm6\") on node \"crc\" DevicePath \"\"" Dec 01 10:34:14 crc kubenswrapper[4933]: I1201 10:34:14.855798 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a776435-8504-4dfb-8394-3dc3cbfb87a3-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 10:34:14 crc kubenswrapper[4933]: I1201 10:34:14.978355 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a776435-8504-4dfb-8394-3dc3cbfb87a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a776435-8504-4dfb-8394-3dc3cbfb87a3" (UID: "0a776435-8504-4dfb-8394-3dc3cbfb87a3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:34:15 crc kubenswrapper[4933]: I1201 10:34:15.059885 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a776435-8504-4dfb-8394-3dc3cbfb87a3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 10:34:15 crc kubenswrapper[4933]: I1201 10:34:15.194704 4933 generic.go:334] "Generic (PLEG): container finished" podID="0a776435-8504-4dfb-8394-3dc3cbfb87a3" containerID="0cafce2fe1436ceed14d86340f1b16b69d815ae12352b3478e96f55b981789bd" exitCode=0 Dec 01 10:34:15 crc kubenswrapper[4933]: I1201 10:34:15.194809 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ww2kn" Dec 01 10:34:15 crc kubenswrapper[4933]: I1201 10:34:15.194830 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ww2kn" event={"ID":"0a776435-8504-4dfb-8394-3dc3cbfb87a3","Type":"ContainerDied","Data":"0cafce2fe1436ceed14d86340f1b16b69d815ae12352b3478e96f55b981789bd"} Dec 01 10:34:15 crc kubenswrapper[4933]: I1201 10:34:15.196773 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ww2kn" event={"ID":"0a776435-8504-4dfb-8394-3dc3cbfb87a3","Type":"ContainerDied","Data":"e562abba29d21fafa99a55672621949ac3f5df11204d7a6e56c59c745b7f9949"} Dec 01 10:34:15 crc kubenswrapper[4933]: I1201 10:34:15.196826 4933 scope.go:117] "RemoveContainer" containerID="0cafce2fe1436ceed14d86340f1b16b69d815ae12352b3478e96f55b981789bd" Dec 01 10:34:15 crc kubenswrapper[4933]: I1201 10:34:15.224141 4933 scope.go:117] "RemoveContainer" containerID="cfca601e6a17ad70026bf87b3dd1fc8b688437229efd58ba72d8a3419aeef831" Dec 01 10:34:15 crc kubenswrapper[4933]: I1201 10:34:15.251416 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ww2kn"] Dec 01 10:34:15 crc kubenswrapper[4933]: I1201 10:34:15.270222 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ww2kn"] Dec 01 10:34:15 crc kubenswrapper[4933]: I1201 10:34:15.279794 4933 scope.go:117] "RemoveContainer" containerID="c5f470d65c3b5af56ddd67c594937d8dd1414bb3ee031b7dfb06fc3b5487a9bb" Dec 01 10:34:15 crc kubenswrapper[4933]: I1201 10:34:15.337533 4933 scope.go:117] "RemoveContainer" containerID="0cafce2fe1436ceed14d86340f1b16b69d815ae12352b3478e96f55b981789bd" Dec 01 10:34:15 crc kubenswrapper[4933]: E1201 10:34:15.339063 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cafce2fe1436ceed14d86340f1b16b69d815ae12352b3478e96f55b981789bd\": container with ID starting with 0cafce2fe1436ceed14d86340f1b16b69d815ae12352b3478e96f55b981789bd not found: ID does not exist" containerID="0cafce2fe1436ceed14d86340f1b16b69d815ae12352b3478e96f55b981789bd" Dec 01 10:34:15 crc kubenswrapper[4933]: I1201 10:34:15.339126 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cafce2fe1436ceed14d86340f1b16b69d815ae12352b3478e96f55b981789bd"} err="failed to get container status \"0cafce2fe1436ceed14d86340f1b16b69d815ae12352b3478e96f55b981789bd\": rpc error: code = NotFound desc = could not find container \"0cafce2fe1436ceed14d86340f1b16b69d815ae12352b3478e96f55b981789bd\": container with ID starting with 0cafce2fe1436ceed14d86340f1b16b69d815ae12352b3478e96f55b981789bd not found: ID does not exist" Dec 01 10:34:15 crc 
kubenswrapper[4933]: I1201 10:34:15.339165 4933 scope.go:117] "RemoveContainer" containerID="cfca601e6a17ad70026bf87b3dd1fc8b688437229efd58ba72d8a3419aeef831" Dec 01 10:34:15 crc kubenswrapper[4933]: E1201 10:34:15.340143 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfca601e6a17ad70026bf87b3dd1fc8b688437229efd58ba72d8a3419aeef831\": container with ID starting with cfca601e6a17ad70026bf87b3dd1fc8b688437229efd58ba72d8a3419aeef831 not found: ID does not exist" containerID="cfca601e6a17ad70026bf87b3dd1fc8b688437229efd58ba72d8a3419aeef831" Dec 01 10:34:15 crc kubenswrapper[4933]: I1201 10:34:15.340215 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfca601e6a17ad70026bf87b3dd1fc8b688437229efd58ba72d8a3419aeef831"} err="failed to get container status \"cfca601e6a17ad70026bf87b3dd1fc8b688437229efd58ba72d8a3419aeef831\": rpc error: code = NotFound desc = could not find container \"cfca601e6a17ad70026bf87b3dd1fc8b688437229efd58ba72d8a3419aeef831\": container with ID starting with cfca601e6a17ad70026bf87b3dd1fc8b688437229efd58ba72d8a3419aeef831 not found: ID does not exist" Dec 01 10:34:15 crc kubenswrapper[4933]: I1201 10:34:15.340267 4933 scope.go:117] "RemoveContainer" containerID="c5f470d65c3b5af56ddd67c594937d8dd1414bb3ee031b7dfb06fc3b5487a9bb" Dec 01 10:34:15 crc kubenswrapper[4933]: E1201 10:34:15.340696 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5f470d65c3b5af56ddd67c594937d8dd1414bb3ee031b7dfb06fc3b5487a9bb\": container with ID starting with c5f470d65c3b5af56ddd67c594937d8dd1414bb3ee031b7dfb06fc3b5487a9bb not found: ID does not exist" containerID="c5f470d65c3b5af56ddd67c594937d8dd1414bb3ee031b7dfb06fc3b5487a9bb" Dec 01 10:34:15 crc kubenswrapper[4933]: I1201 10:34:15.340752 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5f470d65c3b5af56ddd67c594937d8dd1414bb3ee031b7dfb06fc3b5487a9bb"} err="failed to get container status \"c5f470d65c3b5af56ddd67c594937d8dd1414bb3ee031b7dfb06fc3b5487a9bb\": rpc error: code = NotFound desc = could not find container \"c5f470d65c3b5af56ddd67c594937d8dd1414bb3ee031b7dfb06fc3b5487a9bb\": container with ID starting with c5f470d65c3b5af56ddd67c594937d8dd1414bb3ee031b7dfb06fc3b5487a9bb not found: ID does not exist" Dec 01 10:34:15 crc kubenswrapper[4933]: I1201 10:34:15.681772 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a776435-8504-4dfb-8394-3dc3cbfb87a3" path="/var/lib/kubelet/pods/0a776435-8504-4dfb-8394-3dc3cbfb87a3/volumes" Dec 01 10:34:25 crc kubenswrapper[4933]: I1201 10:34:25.029596 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-76fbd6d8c8-kdqkn_006eae88-9faa-428f-9d0c-a9fd104b7d06/barbican-api/0.log" Dec 01 10:34:25 crc kubenswrapper[4933]: I1201 10:34:25.200808 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-76fbd6d8c8-kdqkn_006eae88-9faa-428f-9d0c-a9fd104b7d06/barbican-api-log/0.log" Dec 01 10:34:25 crc kubenswrapper[4933]: I1201 10:34:25.274006 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5b85f87c74-hvnkk_036e08a4-0b6f-498f-a851-723b07c2f687/barbican-keystone-listener/0.log" Dec 01 10:34:25 crc kubenswrapper[4933]: I1201 10:34:25.326654 4933 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-5b85f87c74-hvnkk_036e08a4-0b6f-498f-a851-723b07c2f687/barbican-keystone-listener-log/0.log" Dec 01 10:34:25 crc kubenswrapper[4933]: I1201 10:34:25.480742 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6dd45957c5-5f9ff_19520328-8d8b-4f49-8c93-82cdfb3623c4/barbican-worker/0.log" Dec 01 10:34:25 crc kubenswrapper[4933]: I1201 10:34:25.557639 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6dd45957c5-5f9ff_19520328-8d8b-4f49-8c93-82cdfb3623c4/barbican-worker-log/0.log" Dec 01 10:34:25 crc kubenswrapper[4933]: I1201 10:34:25.829138 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd_32dfd9a4-8242-4931-a791-de1fc8b1d4a9/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 10:34:25 crc kubenswrapper[4933]: I1201 10:34:25.839730 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_77bda02c-44dc-4643-b6d4-4d9f32b260cb/ceilometer-central-agent/0.log" Dec 01 10:34:25 crc kubenswrapper[4933]: I1201 10:34:25.886505 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_77bda02c-44dc-4643-b6d4-4d9f32b260cb/ceilometer-notification-agent/0.log" Dec 01 10:34:26 crc kubenswrapper[4933]: I1201 10:34:26.020025 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_77bda02c-44dc-4643-b6d4-4d9f32b260cb/proxy-httpd/0.log" Dec 01 10:34:26 crc kubenswrapper[4933]: I1201 10:34:26.029877 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_77bda02c-44dc-4643-b6d4-4d9f32b260cb/sg-core/0.log" Dec 01 10:34:26 crc kubenswrapper[4933]: I1201 10:34:26.171167 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_adfbde90-8055-49f0-9ccb-83d1502332cd/cinder-api/0.log" Dec 01 10:34:26 crc kubenswrapper[4933]: I1201 10:34:26.268516 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_adfbde90-8055-49f0-9ccb-83d1502332cd/cinder-api-log/0.log" Dec 01 10:34:26 crc kubenswrapper[4933]: I1201 10:34:26.400015 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_da9e98c6-6da7-4082-8e5d-f8e571486e96/cinder-scheduler/0.log" Dec 01 10:34:26 crc kubenswrapper[4933]: I1201 10:34:26.483479 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_da9e98c6-6da7-4082-8e5d-f8e571486e96/probe/0.log" Dec 01 10:34:26 crc kubenswrapper[4933]: I1201 10:34:26.596666 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8_b4bcbb87-2840-4779-ad5a-9da140a34e9a/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 10:34:26 crc kubenswrapper[4933]: I1201 10:34:26.737339 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-r68nv_5cbc2f4a-039d-45ef-9b06-1e1d59f11abb/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 10:34:26 crc kubenswrapper[4933]: I1201 10:34:26.839012 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-c5hhz_eab25613-97d2-4420-875f-c5b71e62357f/init/0.log" Dec 01 10:34:26 crc kubenswrapper[4933]: I1201 10:34:26.993708 4933 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-c5hhz_eab25613-97d2-4420-875f-c5b71e62357f/init/0.log" Dec 01 10:34:27 crc kubenswrapper[4933]: I1201 10:34:27.159492 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-c5hhz_eab25613-97d2-4420-875f-c5b71e62357f/dnsmasq-dns/0.log" Dec 01 10:34:27 crc kubenswrapper[4933]: I1201 10:34:27.198668 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-m52lh_85720139-3c78-4370-98fc-31899c778fd9/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 10:34:27 crc kubenswrapper[4933]: I1201 10:34:27.372003 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3ded59f9-1443-44e5-93d0-d6fbc126c384/glance-httpd/0.log" Dec 01 10:34:27 crc kubenswrapper[4933]: I1201 10:34:27.480818 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3ded59f9-1443-44e5-93d0-d6fbc126c384/glance-log/0.log" Dec 01 10:34:27 crc kubenswrapper[4933]: I1201 10:34:27.653280 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_45de9ced-9212-422d-9433-2a543d75f37f/glance-httpd/0.log" Dec 01 10:34:27 crc kubenswrapper[4933]: I1201 10:34:27.689973 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_45de9ced-9212-422d-9433-2a543d75f37f/glance-log/0.log" Dec 01 10:34:27 crc kubenswrapper[4933]: I1201 10:34:27.875760 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-75479c6864-2fvz5_000656f6-99fd-43a3-8ade-31b200d0c18a/horizon/1.log" Dec 01 10:34:28 crc kubenswrapper[4933]: I1201 10:34:28.058785 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-75479c6864-2fvz5_000656f6-99fd-43a3-8ade-31b200d0c18a/horizon/0.log" Dec 01 10:34:28 crc kubenswrapper[4933]: I1201 10:34:28.221999 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f_4ddb223b-7a15-4443-8347-19763927dc95/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 10:34:28 crc kubenswrapper[4933]: I1201 10:34:28.307861 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-75479c6864-2fvz5_000656f6-99fd-43a3-8ade-31b200d0c18a/horizon-log/0.log" Dec 01 10:34:28 crc kubenswrapper[4933]: I1201 10:34:28.437850 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-8qwvq_c212c516-4550-436c-8864-c1ff02cf5b14/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 10:34:28 crc kubenswrapper[4933]: I1201 10:34:28.739862 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29409721-9h79w_e4069c3e-e0a1-4aa7-b54d-a040272d3db4/keystone-cron/0.log" Dec 01 10:34:28 crc kubenswrapper[4933]: I1201 10:34:28.784181 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6fd8c5dc6c-czndt_ea85bb6c-bf92-4f66-8068-8ccc7536bdb4/keystone-api/0.log" Dec 01 10:34:28 crc kubenswrapper[4933]: I1201 10:34:28.960860 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_665307e0-fe7b-411a-b394-d383671c8809/kube-state-metrics/0.log" Dec 01 10:34:29 crc kubenswrapper[4933]: I1201 10:34:29.035823 4933 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj_db1900b9-3716-46b2-9761-18a6721bd258/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 10:34:29 crc kubenswrapper[4933]: I1201 10:34:29.833057 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5dd758bcf-r4prx_39d17922-6634-497e-9dab-330fcbde16fe/neutron-httpd/0.log" Dec 01 10:34:29 crc kubenswrapper[4933]: I1201 10:34:29.849943 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq_5242466a-3061-4db5-b9dd-77f6bff70350/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 10:34:29 crc kubenswrapper[4933]: I1201 10:34:29.854125 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5dd758bcf-r4prx_39d17922-6634-497e-9dab-330fcbde16fe/neutron-api/0.log" Dec 01 10:34:30 crc kubenswrapper[4933]: I1201 10:34:30.544756 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_b9107505-ec32-479e-b76e-1ffa605a3bfb/nova-cell0-conductor-conductor/0.log" Dec 01 10:34:30 crc kubenswrapper[4933]: I1201 10:34:30.659058 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265/nova-api-log/0.log" Dec 01 10:34:30 crc kubenswrapper[4933]: I1201 10:34:30.832273 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_80bcbf70-b098-48a4-9dfe-fdb8c3b87e8e/nova-cell1-conductor-conductor/0.log" Dec 01 10:34:30 crc kubenswrapper[4933]: I1201 10:34:30.888791 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265/nova-api-api/0.log" Dec 01 10:34:31 crc kubenswrapper[4933]: I1201 10:34:31.648428 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-69txc_e9ac33c2-a83f-4ec8-8458-b366a2aebd5d/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 10:34:31 crc kubenswrapper[4933]: I1201 10:34:31.689189 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6c08ac8e-6639-413d-8534-625fa6adc9ae/nova-cell1-novncproxy-novncproxy/0.log" Dec 01 10:34:32 crc kubenswrapper[4933]: I1201 10:34:32.010934 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_55378084-cbcf-4c0c-8bdc-c9d2f026ca3c/nova-metadata-log/0.log" Dec 01 10:34:32 crc kubenswrapper[4933]: I1201 10:34:32.240791 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_55a605ff-7d52-4d80-bd32-6301d0c696c1/mysql-bootstrap/0.log" Dec 01 10:34:32 crc kubenswrapper[4933]: I1201 10:34:32.274289 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_0283919c-d007-4102-a7dd-33bd1388971c/nova-scheduler-scheduler/0.log" Dec 01 10:34:32 crc kubenswrapper[4933]: I1201 10:34:32.519114 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_55a605ff-7d52-4d80-bd32-6301d0c696c1/mysql-bootstrap/0.log" Dec 01 10:34:32 crc kubenswrapper[4933]: I1201 10:34:32.530956 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_55a605ff-7d52-4d80-bd32-6301d0c696c1/galera/0.log" Dec 01 10:34:32 crc kubenswrapper[4933]: I1201 10:34:32.760993 4933 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_1ba27bfa-74d8-4df5-8217-666a02132516/mysql-bootstrap/0.log" Dec 01 10:34:33 crc kubenswrapper[4933]: I1201 10:34:33.414847 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_55378084-cbcf-4c0c-8bdc-c9d2f026ca3c/nova-metadata-metadata/0.log" Dec 01 10:34:33 crc kubenswrapper[4933]: I1201 10:34:33.716138 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_7ad829f6-4a62-4ed2-a99c-30aed564d585/openstackclient/0.log" Dec 01 10:34:33 crc kubenswrapper[4933]: I1201 10:34:33.752472 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1ba27bfa-74d8-4df5-8217-666a02132516/mysql-bootstrap/0.log" Dec 01 10:34:33 crc kubenswrapper[4933]: I1201 10:34:33.754134 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1ba27bfa-74d8-4df5-8217-666a02132516/galera/0.log" Dec 01 10:34:33 crc kubenswrapper[4933]: I1201 10:34:33.989100 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-5tgrr_a1f2d08e-94f8-47ec-9e7e-a4722b71b609/ovn-controller/0.log" Dec 01 10:34:34 crc kubenswrapper[4933]: I1201 10:34:34.097194 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-qxwjl_1db280e5-ecd7-44cf-933a-2d55ba6f7b42/openstack-network-exporter/0.log" Dec 01 10:34:34 crc kubenswrapper[4933]: I1201 10:34:34.255810 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l8bgh_90984aaa-e287-4038-bc14-16debb186a8d/ovsdb-server-init/0.log" Dec 01 10:34:34 crc kubenswrapper[4933]: I1201 10:34:34.499037 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l8bgh_90984aaa-e287-4038-bc14-16debb186a8d/ovs-vswitchd/0.log" Dec 01 10:34:34 crc kubenswrapper[4933]: I1201 10:34:34.518808 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l8bgh_90984aaa-e287-4038-bc14-16debb186a8d/ovsdb-server-init/0.log" Dec 01 10:34:34 crc kubenswrapper[4933]: I1201 10:34:34.577776 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l8bgh_90984aaa-e287-4038-bc14-16debb186a8d/ovsdb-server/0.log" Dec 01 10:34:34 crc kubenswrapper[4933]: I1201 10:34:34.836299 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-s6cr9_f053536c-b281-4870-b827-93d59be1fdbd/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 10:34:34 crc kubenswrapper[4933]: I1201 10:34:34.879811 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9dc35964-1186-483a-8904-c98af6497c53/openstack-network-exporter/0.log" Dec 01 10:34:34 crc kubenswrapper[4933]: I1201 10:34:34.903602 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9dc35964-1186-483a-8904-c98af6497c53/ovn-northd/0.log" Dec 01 10:34:35 crc kubenswrapper[4933]: I1201 10:34:35.139579 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_fa3b7950-1309-47f9-9372-7932d0ef0ced/openstack-network-exporter/0.log" Dec 01 10:34:35 crc kubenswrapper[4933]: I1201 10:34:35.194194 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_fa3b7950-1309-47f9-9372-7932d0ef0ced/ovsdbserver-nb/0.log" Dec 01 10:34:35 crc kubenswrapper[4933]: I1201 10:34:35.381119 4933 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_75ef03e2-9526-4184-a3cf-2a5bb26fec93/openstack-network-exporter/0.log" Dec 01 10:34:35 crc kubenswrapper[4933]: I1201 10:34:35.410132 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_75ef03e2-9526-4184-a3cf-2a5bb26fec93/ovsdbserver-sb/0.log" Dec 01 10:34:35 crc kubenswrapper[4933]: I1201 10:34:35.575842 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-55788c59f6-zd5sp_c2caece4-8b42-4e68-9a5d-096ef39b4120/placement-api/0.log" Dec 01 10:34:35 crc kubenswrapper[4933]: I1201 10:34:35.763177 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-55788c59f6-zd5sp_c2caece4-8b42-4e68-9a5d-096ef39b4120/placement-log/0.log" Dec 01 10:34:35 crc kubenswrapper[4933]: I1201 10:34:35.790085 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3ac85014-ac29-45f6-9461-a8c02c4fcca4/setup-container/0.log" Dec 01 10:34:36 crc kubenswrapper[4933]: I1201 10:34:36.033778 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3ac85014-ac29-45f6-9461-a8c02c4fcca4/setup-container/0.log" Dec 01 10:34:36 crc kubenswrapper[4933]: I1201 10:34:36.037518 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3ac85014-ac29-45f6-9461-a8c02c4fcca4/rabbitmq/0.log" Dec 01 10:34:36 crc kubenswrapper[4933]: I1201 10:34:36.141962 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4b205db3-c812-4f4e-a81c-3662f2ca0cf1/setup-container/0.log" Dec 01 10:34:36 crc kubenswrapper[4933]: I1201 10:34:36.346500 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4b205db3-c812-4f4e-a81c-3662f2ca0cf1/setup-container/0.log" Dec 01 10:34:36 crc kubenswrapper[4933]: I1201 10:34:36.436888 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4b205db3-c812-4f4e-a81c-3662f2ca0cf1/rabbitmq/0.log" Dec 01 10:34:36 crc kubenswrapper[4933]: I1201 10:34:36.503606 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r_6a84779c-7b89-4a0c-9ea0-34d0af08979d/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 10:34:36 crc kubenswrapper[4933]: I1201 10:34:36.704633 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-tkd78_d9cb819c-73da-4725-aaca-3cac78b4670f/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 10:34:36 crc kubenswrapper[4933]: I1201 10:34:36.748535 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-grj94_af261b96-cdfe-4987-8689-bec0506287d2/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 10:34:36 crc kubenswrapper[4933]: I1201 10:34:36.945977 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-78bfv_c57f613c-9cc6-447a-acf6-11a2d381862f/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 10:34:37 crc kubenswrapper[4933]: I1201 10:34:37.040344 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-gt2rs_d2735060-c736-46d2-882c-60c0a7e96bc8/ssh-known-hosts-edpm-deployment/0.log" Dec 01 10:34:37 crc kubenswrapper[4933]: I1201 10:34:37.347600 4933 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-proxy-5c69467867-495s4_4ceb0496-c824-4d20-8a63-43bc6aa47f97/proxy-server/0.log" Dec 01 10:34:37 crc kubenswrapper[4933]: I1201 10:34:37.409729 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5c69467867-495s4_4ceb0496-c824-4d20-8a63-43bc6aa47f97/proxy-httpd/0.log" Dec 01 10:34:37 crc kubenswrapper[4933]: I1201 10:34:37.594765 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/account-auditor/0.log" Dec 01 10:34:37 crc kubenswrapper[4933]: I1201 10:34:37.637093 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-wpgrh_fa13e197-3320-4314-86ee-a1b90292ab1d/swift-ring-rebalance/0.log" Dec 01 10:34:37 crc kubenswrapper[4933]: I1201 10:34:37.664362 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/account-reaper/0.log" Dec 01 10:34:37 crc kubenswrapper[4933]: I1201 10:34:37.844927 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/account-replicator/0.log" Dec 01 10:34:37 crc kubenswrapper[4933]: I1201 10:34:37.889356 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/container-auditor/0.log" Dec 01 10:34:37 crc kubenswrapper[4933]: I1201 10:34:37.898061 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/account-server/0.log" Dec 01 10:34:38 crc kubenswrapper[4933]: I1201 10:34:38.012425 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/container-replicator/0.log" Dec 01 10:34:38 crc kubenswrapper[4933]: I1201 10:34:38.097619 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/container-server/0.log" Dec 01 10:34:38 crc kubenswrapper[4933]: I1201 10:34:38.133446 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/container-updater/0.log" Dec 01 10:34:38 crc kubenswrapper[4933]: I1201 10:34:38.245717 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/object-auditor/0.log" Dec 01 10:34:38 crc kubenswrapper[4933]: I1201 10:34:38.313780 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/object-expirer/0.log" Dec 01 10:34:38 crc kubenswrapper[4933]: I1201 10:34:38.345678 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/object-replicator/0.log" Dec 01 10:34:38 crc kubenswrapper[4933]: I1201 10:34:38.414050 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/object-server/0.log" Dec 01 10:34:38 crc kubenswrapper[4933]: I1201 10:34:38.465589 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/object-updater/0.log" Dec 01 10:34:38 crc kubenswrapper[4933]: I1201 10:34:38.545765 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/rsync/0.log" Dec 01 
10:34:38 crc kubenswrapper[4933]: I1201 10:34:38.623742 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/swift-recon-cron/0.log" Dec 01 10:34:38 crc kubenswrapper[4933]: I1201 10:34:38.771798 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-qxssd_eab7fc1e-f6ce-41a3-9a65-1773b1c2e823/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 10:34:38 crc kubenswrapper[4933]: I1201 10:34:38.888504 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_c272594d-4d61-490a-a44d-0a82106c9a1f/tempest-tests-tempest-tests-runner/0.log" Dec 01 10:34:39 crc kubenswrapper[4933]: I1201 10:34:39.013121 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_265d92de-a6f0-45ea-9175-15a4bb7c1716/test-operator-logs-container/0.log" Dec 01 10:34:39 crc kubenswrapper[4933]: I1201 10:34:39.122370 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl_af544a25-e743-4edc-8d80-228c9da3ce45/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 10:34:41 crc kubenswrapper[4933]: I1201 10:34:41.740690 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 10:34:41 crc kubenswrapper[4933]: I1201 10:34:41.741678 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 10:34:49 crc kubenswrapper[4933]: I1201 10:34:49.621910 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_12be8eec-c6b1-4606-83de-e19ac2ab17eb/memcached/0.log" Dec 01 10:35:10 crc kubenswrapper[4933]: I1201 10:35:10.372618 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7_e9c7ed8e-3041-437c-a3f0-b8c2cf94c503/util/0.log" Dec 01 10:35:10 crc kubenswrapper[4933]: I1201 10:35:10.555922 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7_e9c7ed8e-3041-437c-a3f0-b8c2cf94c503/pull/0.log" Dec 01 10:35:10 crc kubenswrapper[4933]: I1201 10:35:10.587665 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7_e9c7ed8e-3041-437c-a3f0-b8c2cf94c503/pull/0.log" Dec 01 10:35:10 crc kubenswrapper[4933]: I1201 10:35:10.599790 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7_e9c7ed8e-3041-437c-a3f0-b8c2cf94c503/util/0.log" Dec 01 10:35:10 crc kubenswrapper[4933]: I1201 10:35:10.919525 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7_e9c7ed8e-3041-437c-a3f0-b8c2cf94c503/pull/0.log" Dec 01 10:35:10 crc 
kubenswrapper[4933]: I1201 10:35:10.926328 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7_e9c7ed8e-3041-437c-a3f0-b8c2cf94c503/util/0.log" Dec 01 10:35:10 crc kubenswrapper[4933]: I1201 10:35:10.947833 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7_e9c7ed8e-3041-437c-a3f0-b8c2cf94c503/extract/0.log" Dec 01 10:35:11 crc kubenswrapper[4933]: I1201 10:35:11.235421 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-9fvkr_e1f14086-5509-48fe-a88c-c2717009ef93/manager/0.log" Dec 01 10:35:11 crc kubenswrapper[4933]: I1201 10:35:11.264601 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-9fvkr_e1f14086-5509-48fe-a88c-c2717009ef93/kube-rbac-proxy/0.log" Dec 01 10:35:11 crc kubenswrapper[4933]: I1201 10:35:11.317752 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-fntw7_19b19877-3b1b-40f9-9501-329bceb4756a/kube-rbac-proxy/0.log" Dec 01 10:35:11 crc kubenswrapper[4933]: I1201 10:35:11.468456 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-fntw7_19b19877-3b1b-40f9-9501-329bceb4756a/manager/0.log" Dec 01 10:35:11 crc kubenswrapper[4933]: I1201 10:35:11.499405 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-cpthv_9c52b072-b528-4fee-88b8-c878150882b1/kube-rbac-proxy/0.log" Dec 01 10:35:11 crc kubenswrapper[4933]: I1201 10:35:11.566059 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-cpthv_9c52b072-b528-4fee-88b8-c878150882b1/manager/0.log" Dec 01 10:35:11 crc kubenswrapper[4933]: I1201 10:35:11.740912 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 10:35:11 crc kubenswrapper[4933]: I1201 10:35:11.740992 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 10:35:11 crc kubenswrapper[4933]: I1201 10:35:11.760980 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-4bfbh_e88cb01f-84f3-4cdc-9d5d-f283f883868e/kube-rbac-proxy/0.log" Dec 01 10:35:11 crc kubenswrapper[4933]: I1201 10:35:11.841696 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-4bfbh_e88cb01f-84f3-4cdc-9d5d-f283f883868e/manager/0.log" Dec 01 10:35:12 crc kubenswrapper[4933]: I1201 10:35:12.024930 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-6q6m6_9564306d-6348-40b4-9e3e-42fcd5778383/kube-rbac-proxy/0.log" 
Dec 01 10:35:12 crc kubenswrapper[4933]: I1201 10:35:12.037720 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-6q6m6_9564306d-6348-40b4-9e3e-42fcd5778383/manager/0.log"
Dec 01 10:35:12 crc kubenswrapper[4933]: I1201 10:35:12.171080 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-gd76x_96d92174-459d-4657-bbbb-a56271877411/kube-rbac-proxy/0.log"
Dec 01 10:35:12 crc kubenswrapper[4933]: I1201 10:35:12.531526 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-gd76x_96d92174-459d-4657-bbbb-a56271877411/manager/0.log"
Dec 01 10:35:12 crc kubenswrapper[4933]: I1201 10:35:12.645919 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-hcgq6_7dd39823-94d3-4a96-90e4-ada73223c4b0/kube-rbac-proxy/0.log"
Dec 01 10:35:12 crc kubenswrapper[4933]: I1201 10:35:12.803058 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-hcgq6_7dd39823-94d3-4a96-90e4-ada73223c4b0/manager/0.log"
Dec 01 10:35:12 crc kubenswrapper[4933]: I1201 10:35:12.852531 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-7rjkh_eefc3c9c-eade-4b6e-8902-6936d481cb1b/kube-rbac-proxy/0.log"
Dec 01 10:35:12 crc kubenswrapper[4933]: I1201 10:35:12.927733 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-7rjkh_eefc3c9c-eade-4b6e-8902-6936d481cb1b/manager/0.log"
Dec 01 10:35:13 crc kubenswrapper[4933]: I1201 10:35:13.078838 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-546d4bdf48-94gt2_b303701b-30bc-4779-b1fa-f574bd6cce65/kube-rbac-proxy/0.log"
Dec 01 10:35:13 crc kubenswrapper[4933]: I1201 10:35:13.172181 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-546d4bdf48-94gt2_b303701b-30bc-4779-b1fa-f574bd6cce65/manager/0.log"
Dec 01 10:35:13 crc kubenswrapper[4933]: I1201 10:35:13.372730 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-mlmgw_b925c282-ee4d-4b1f-8f18-d3baa2f8faef/kube-rbac-proxy/0.log"
Dec 01 10:35:13 crc kubenswrapper[4933]: I1201 10:35:13.441455 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-mlmgw_b925c282-ee4d-4b1f-8f18-d3baa2f8faef/manager/0.log"
Dec 01 10:35:13 crc kubenswrapper[4933]: I1201 10:35:13.525115 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-n5pnz_e32cc225-71ff-4edf-8e11-ac7abf7afe27/kube-rbac-proxy/0.log"
Dec 01 10:35:13 crc kubenswrapper[4933]: I1201 10:35:13.609660 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-n5pnz_e32cc225-71ff-4edf-8e11-ac7abf7afe27/manager/0.log"
Dec 01 10:35:13 crc kubenswrapper[4933]: I1201 10:35:13.660692 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-8rdcd_c10a734c-970c-42dd-aa15-a27dd68941e1/kube-rbac-proxy/0.log"
Dec 01 10:35:13 crc kubenswrapper[4933]: I1201 10:35:13.785018 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-8rdcd_c10a734c-970c-42dd-aa15-a27dd68941e1/manager/0.log"
Dec 01 10:35:14 crc kubenswrapper[4933]: I1201 10:35:14.025414 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-w8tzl_9a84bd2a-303d-492c-b507-61fa590290d1/kube-rbac-proxy/0.log"
Dec 01 10:35:14 crc kubenswrapper[4933]: I1201 10:35:14.239953 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-lxkkf_a8f52d69-0961-4ac0-b41f-200400bfcf2b/kube-rbac-proxy/0.log"
Dec 01 10:35:14 crc kubenswrapper[4933]: I1201 10:35:14.295719 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-w8tzl_9a84bd2a-303d-492c-b507-61fa590290d1/manager/0.log"
Dec 01 10:35:14 crc kubenswrapper[4933]: I1201 10:35:14.309200 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-lxkkf_a8f52d69-0961-4ac0-b41f-200400bfcf2b/manager/0.log"
Dec 01 10:35:14 crc kubenswrapper[4933]: I1201 10:35:14.506851 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd45jrln_96699ea8-fc44-4dc2-a6f2-f2109d091097/kube-rbac-proxy/0.log"
Dec 01 10:35:14 crc kubenswrapper[4933]: I1201 10:35:14.554578 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd45jrln_96699ea8-fc44-4dc2-a6f2-f2109d091097/manager/0.log"
Dec 01 10:35:15 crc kubenswrapper[4933]: I1201 10:35:15.693400 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-vnqpf_61a68407-8b55-4951-aa9b-8f2348e5b3b1/registry-server/0.log"
Dec 01 10:35:15 crc kubenswrapper[4933]: I1201 10:35:15.925799 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-7c9rv_2550654d-3a84-420e-bcaa-75a2f3c88dec/kube-rbac-proxy/0.log"
Dec 01 10:35:15 crc kubenswrapper[4933]: I1201 10:35:15.987223 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-66ff97f68c-jqgr4_2a175cf5-68b0-46ab-9e64-646af044da97/operator/0.log"
Dec 01 10:35:16 crc kubenswrapper[4933]: I1201 10:35:16.037748 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-7c9rv_2550654d-3a84-420e-bcaa-75a2f3c88dec/manager/0.log"
Dec 01 10:35:16 crc kubenswrapper[4933]: I1201 10:35:16.513988 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-w92f7_83542dc0-212d-4257-935c-aced954e9157/manager/0.log"
Dec 01 10:35:16 crc kubenswrapper[4933]: I1201 10:35:16.515924 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-w92f7_83542dc0-212d-4257-935c-aced954e9157/kube-rbac-proxy/0.log"
Dec 01 10:35:16 crc kubenswrapper[4933]: I1201 10:35:16.668923 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-547ff67f67-9fnsd_c976f88e-97eb-4223-9475-252505656b6d/manager/0.log"
Dec 01 10:35:16 crc kubenswrapper[4933]: I1201 10:35:16.739855 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-dhlrp_3aa898e5-9bf0-4baf-9c71-261229f0baf0/operator/0.log"
Dec 01 10:35:16 crc kubenswrapper[4933]: I1201 10:35:16.773096 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-frx4s_0bd5ca15-126a-4c31-814b-b0390dc01b3c/kube-rbac-proxy/0.log"
Dec 01 10:35:16 crc kubenswrapper[4933]: I1201 10:35:16.847541 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-frx4s_0bd5ca15-126a-4c31-814b-b0390dc01b3c/manager/0.log"
Dec 01 10:35:16 crc kubenswrapper[4933]: I1201 10:35:16.950321 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-b2gcw_6c192ef8-b774-486f-bb69-d73e8b89989e/kube-rbac-proxy/0.log"
Dec 01 10:35:17 crc kubenswrapper[4933]: I1201 10:35:17.068048 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-b2gcw_6c192ef8-b774-486f-bb69-d73e8b89989e/manager/0.log"
Dec 01 10:35:17 crc kubenswrapper[4933]: I1201 10:35:17.127581 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-w9jcs_c807406f-80fb-422b-a68f-e9706da2ac42/kube-rbac-proxy/0.log"
Dec 01 10:35:17 crc kubenswrapper[4933]: I1201 10:35:17.208335 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-w9jcs_c807406f-80fb-422b-a68f-e9706da2ac42/manager/0.log"
Dec 01 10:35:17 crc kubenswrapper[4933]: I1201 10:35:17.956097 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-bmhhw_48cfc1f9-dbcb-4ff7-88b7-aa7709648627/kube-rbac-proxy/0.log"
Dec 01 10:35:17 crc kubenswrapper[4933]: I1201 10:35:17.989048 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-bmhhw_48cfc1f9-dbcb-4ff7-88b7-aa7709648627/manager/0.log"
Dec 01 10:35:40 crc kubenswrapper[4933]: I1201 10:35:40.206321 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-6dwg7_a7bd5924-9a3f-43cf-99b1-2d5d20975f81/control-plane-machine-set-operator/0.log"
Dec 01 10:35:40 crc kubenswrapper[4933]: I1201 10:35:40.441455 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xdrhr_29cdc67d-6d2a-44b2-bd31-3634aff7f52e/kube-rbac-proxy/0.log"
Dec 01 10:35:40 crc kubenswrapper[4933]: I1201 10:35:40.458447 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xdrhr_29cdc67d-6d2a-44b2-bd31-3634aff7f52e/machine-api-operator/0.log"
Dec 01 10:35:41 crc kubenswrapper[4933]: I1201 10:35:41.740816 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Dec 01 10:35:41 crc kubenswrapper[4933]: I1201 10:35:41.741274 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 10:35:41 crc kubenswrapper[4933]: I1201 10:35:41.741349 4933 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" Dec 01 10:35:41 crc kubenswrapper[4933]: I1201 10:35:41.742325 4933 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3"} pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 10:35:41 crc kubenswrapper[4933]: I1201 10:35:41.742396 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" containerID="cri-o://3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3" gracePeriod=600 Dec 01 10:35:41 crc kubenswrapper[4933]: E1201 10:35:41.885636 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:35:42 crc kubenswrapper[4933]: I1201 10:35:42.280467 4933 generic.go:334] "Generic (PLEG): container finished" podID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerID="3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3" exitCode=0 Dec 01 10:35:42 crc kubenswrapper[4933]: I1201 10:35:42.280567 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerDied","Data":"3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3"} Dec 01 10:35:42 crc kubenswrapper[4933]: I1201 10:35:42.280669 4933 scope.go:117] "RemoveContainer" containerID="cf410e45128ad7bd3f91c85ab4abae6c39329b92f8a9232778c31943a501f4a2" Dec 01 10:35:42 crc kubenswrapper[4933]: I1201 10:35:42.282072 4933 scope.go:117] "RemoveContainer" containerID="3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3" Dec 01 10:35:42 crc kubenswrapper[4933]: E1201 10:35:42.282647 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:35:54 crc kubenswrapper[4933]: I1201 10:35:54.669216 4933 scope.go:117] "RemoveContainer" 
containerID="3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3" Dec 01 10:35:54 crc kubenswrapper[4933]: E1201 10:35:54.670174 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:35:55 crc kubenswrapper[4933]: I1201 10:35:55.424626 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-ts7sf_b9576096-fd1b-4f6e-95c9-37517c77cca1/cert-manager-controller/0.log" Dec 01 10:35:55 crc kubenswrapper[4933]: I1201 10:35:55.611405 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-5b7g6_d32d2e97-81db-4119-b9c6-a71b974a56a8/cert-manager-cainjector/0.log" Dec 01 10:35:55 crc kubenswrapper[4933]: I1201 10:35:55.689452 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-x8fvf_c54106fe-eb4b-4f41-afce-e9fde8067ec8/cert-manager-webhook/0.log" Dec 01 10:36:06 crc kubenswrapper[4933]: I1201 10:36:06.667594 4933 scope.go:117] "RemoveContainer" containerID="3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3" Dec 01 10:36:06 crc kubenswrapper[4933]: E1201 10:36:06.668786 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:36:10 crc kubenswrapper[4933]: I1201 10:36:10.256964 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-vh9f4_575c5872-9ab6-4a15-86b7-9dfbe33c0171/nmstate-console-plugin/0.log" Dec 01 10:36:10 crc kubenswrapper[4933]: I1201 10:36:10.479229 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-kqnxq_18b298e1-b8df-4272-a30c-496424de8d76/kube-rbac-proxy/0.log" Dec 01 10:36:10 crc kubenswrapper[4933]: I1201 10:36:10.482373 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-sn9zt_b95afca6-9346-403c-b1ab-d04d36537c40/nmstate-handler/0.log" Dec 01 10:36:10 crc kubenswrapper[4933]: I1201 10:36:10.611376 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-kqnxq_18b298e1-b8df-4272-a30c-496424de8d76/nmstate-metrics/0.log" Dec 01 10:36:10 crc kubenswrapper[4933]: I1201 10:36:10.693513 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-l6tck_c3ce2443-f386-4203-8c92-f0961122fb6b/nmstate-operator/0.log" Dec 01 10:36:10 crc kubenswrapper[4933]: I1201 10:36:10.890404 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-8bmq8_ccec4c34-0e07-4b0a-a36a-a6bc46982fa6/nmstate-webhook/0.log" Dec 01 10:36:18 crc kubenswrapper[4933]: I1201 10:36:18.668321 4933 scope.go:117] "RemoveContainer" 
containerID="3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3" Dec 01 10:36:18 crc kubenswrapper[4933]: E1201 10:36:18.671928 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:36:27 crc kubenswrapper[4933]: I1201 10:36:27.314393 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-rczrx_e9c82311-fa12-41b3-a4e2-50bca0b1c23f/kube-rbac-proxy/0.log" Dec 01 10:36:27 crc kubenswrapper[4933]: I1201 10:36:27.369689 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-rczrx_e9c82311-fa12-41b3-a4e2-50bca0b1c23f/controller/0.log" Dec 01 10:36:27 crc kubenswrapper[4933]: I1201 10:36:27.509152 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/cp-frr-files/0.log" Dec 01 10:36:27 crc kubenswrapper[4933]: I1201 10:36:27.771950 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/cp-frr-files/0.log" Dec 01 10:36:27 crc kubenswrapper[4933]: I1201 10:36:27.773510 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/cp-reloader/0.log" Dec 01 10:36:27 crc kubenswrapper[4933]: I1201 10:36:27.809551 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/cp-reloader/0.log" Dec 01 10:36:27 crc kubenswrapper[4933]: I1201 10:36:27.846364 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/cp-metrics/0.log" Dec 01 10:36:28 crc kubenswrapper[4933]: I1201 10:36:28.003227 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/cp-reloader/0.log" Dec 01 10:36:28 crc kubenswrapper[4933]: I1201 10:36:28.021484 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/cp-frr-files/0.log" Dec 01 10:36:28 crc kubenswrapper[4933]: I1201 10:36:28.041723 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/cp-metrics/0.log" Dec 01 10:36:28 crc kubenswrapper[4933]: I1201 10:36:28.095380 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/cp-metrics/0.log" Dec 01 10:36:28 crc kubenswrapper[4933]: I1201 10:36:28.293026 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/cp-reloader/0.log" Dec 01 10:36:28 crc kubenswrapper[4933]: I1201 10:36:28.302669 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/cp-frr-files/0.log" Dec 01 10:36:28 crc kubenswrapper[4933]: I1201 10:36:28.328151 4933 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/cp-metrics/0.log" Dec 01 10:36:28 crc kubenswrapper[4933]: I1201 10:36:28.342194 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/controller/0.log" Dec 01 10:36:28 crc kubenswrapper[4933]: I1201 10:36:28.500686 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/frr-metrics/0.log" Dec 01 10:36:28 crc kubenswrapper[4933]: I1201 10:36:28.517145 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/kube-rbac-proxy/0.log" Dec 01 10:36:28 crc kubenswrapper[4933]: I1201 10:36:28.606235 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/kube-rbac-proxy-frr/0.log" Dec 01 10:36:28 crc kubenswrapper[4933]: I1201 10:36:28.801217 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/reloader/0.log" Dec 01 10:36:28 crc kubenswrapper[4933]: I1201 10:36:28.855040 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-kfn29_bbb71b8b-46bf-4013-93d7-f3a58f98b8f0/frr-k8s-webhook-server/0.log" Dec 01 10:36:29 crc kubenswrapper[4933]: I1201 10:36:29.176174 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7d44559b9d-mg6pw_7c2e1948-244d-4059-9f94-0675dfaa751f/manager/0.log" Dec 01 10:36:29 crc kubenswrapper[4933]: I1201 10:36:29.319793 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-64d64d5bf5-hc5zj_db30f3a8-b953-4818-8999-c247744b8c1a/webhook-server/0.log" Dec 01 10:36:29 crc kubenswrapper[4933]: I1201 10:36:29.476360 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-crbpg_7ab2e723-8322-4328-8afc-4b13397a538c/kube-rbac-proxy/0.log" Dec 01 10:36:29 crc kubenswrapper[4933]: I1201 10:36:29.675920 4933 scope.go:117] "RemoveContainer" containerID="3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3" Dec 01 10:36:29 crc kubenswrapper[4933]: E1201 10:36:29.676269 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:36:30 crc kubenswrapper[4933]: I1201 10:36:30.188965 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-crbpg_7ab2e723-8322-4328-8afc-4b13397a538c/speaker/0.log" Dec 01 10:36:30 crc kubenswrapper[4933]: I1201 10:36:30.276829 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/frr/0.log" Dec 01 10:36:41 crc kubenswrapper[4933]: I1201 10:36:41.668652 4933 scope.go:117] "RemoveContainer" containerID="3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3" Dec 01 10:36:41 crc kubenswrapper[4933]: E1201 10:36:41.669995 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:36:46 crc kubenswrapper[4933]: I1201 10:36:46.208602 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm_61cb8d89-75c1-4be3-9c9b-ff1337d73c4e/util/0.log" Dec 01 10:36:46 crc kubenswrapper[4933]: I1201 10:36:46.452089 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm_61cb8d89-75c1-4be3-9c9b-ff1337d73c4e/pull/0.log" Dec 01 10:36:46 crc kubenswrapper[4933]: I1201 10:36:46.489588 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm_61cb8d89-75c1-4be3-9c9b-ff1337d73c4e/pull/0.log" Dec 01 10:36:46 crc kubenswrapper[4933]: I1201 10:36:46.493501 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm_61cb8d89-75c1-4be3-9c9b-ff1337d73c4e/util/0.log" Dec 01 10:36:46 crc kubenswrapper[4933]: I1201 10:36:46.728811 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm_61cb8d89-75c1-4be3-9c9b-ff1337d73c4e/pull/0.log" Dec 01 10:36:46 crc kubenswrapper[4933]: I1201 10:36:46.729352 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm_61cb8d89-75c1-4be3-9c9b-ff1337d73c4e/extract/0.log" Dec 01 10:36:46 crc kubenswrapper[4933]: I1201 10:36:46.731951 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm_61cb8d89-75c1-4be3-9c9b-ff1337d73c4e/util/0.log" Dec 01 10:36:46 crc kubenswrapper[4933]: I1201 10:36:46.927160 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4_dccc76f6-8c6b-4e59-b7a1-0b0892183838/util/0.log" Dec 01 10:36:47 crc kubenswrapper[4933]: I1201 10:36:47.148754 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4_dccc76f6-8c6b-4e59-b7a1-0b0892183838/util/0.log" Dec 01 10:36:47 crc kubenswrapper[4933]: I1201 10:36:47.150314 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4_dccc76f6-8c6b-4e59-b7a1-0b0892183838/pull/0.log" Dec 01 10:36:47 crc kubenswrapper[4933]: I1201 10:36:47.171364 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4_dccc76f6-8c6b-4e59-b7a1-0b0892183838/pull/0.log" Dec 01 10:36:47 crc kubenswrapper[4933]: I1201 10:36:47.380265 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4_dccc76f6-8c6b-4e59-b7a1-0b0892183838/util/0.log" Dec 01 10:36:47 crc kubenswrapper[4933]: I1201 10:36:47.404727 4933 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4_dccc76f6-8c6b-4e59-b7a1-0b0892183838/extract/0.log" Dec 01 10:36:47 crc kubenswrapper[4933]: I1201 10:36:47.404797 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4_dccc76f6-8c6b-4e59-b7a1-0b0892183838/pull/0.log" Dec 01 10:36:47 crc kubenswrapper[4933]: I1201 10:36:47.582657 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4hfd4_e1c51de3-51f6-4fb3-9800-fb97313a6212/extract-utilities/0.log" Dec 01 10:36:47 crc kubenswrapper[4933]: I1201 10:36:47.749080 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4hfd4_e1c51de3-51f6-4fb3-9800-fb97313a6212/extract-utilities/0.log" Dec 01 10:36:47 crc kubenswrapper[4933]: I1201 10:36:47.761602 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4hfd4_e1c51de3-51f6-4fb3-9800-fb97313a6212/extract-content/0.log" Dec 01 10:36:47 crc kubenswrapper[4933]: I1201 10:36:47.844423 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4hfd4_e1c51de3-51f6-4fb3-9800-fb97313a6212/extract-content/0.log" Dec 01 10:36:48 crc kubenswrapper[4933]: I1201 10:36:48.027225 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4hfd4_e1c51de3-51f6-4fb3-9800-fb97313a6212/extract-utilities/0.log" Dec 01 10:36:48 crc kubenswrapper[4933]: I1201 10:36:48.103758 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4hfd4_e1c51de3-51f6-4fb3-9800-fb97313a6212/extract-content/0.log" Dec 01 10:36:48 crc kubenswrapper[4933]: I1201 10:36:48.354289 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gpq4t_a5754f16-70b2-4f07-96ae-5233861175bb/extract-utilities/0.log" Dec 01 10:36:48 crc kubenswrapper[4933]: I1201 10:36:48.683813 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gpq4t_a5754f16-70b2-4f07-96ae-5233861175bb/extract-utilities/0.log" Dec 01 10:36:48 crc kubenswrapper[4933]: I1201 10:36:48.704986 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gpq4t_a5754f16-70b2-4f07-96ae-5233861175bb/extract-content/0.log" Dec 01 10:36:48 crc kubenswrapper[4933]: I1201 10:36:48.713394 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gpq4t_a5754f16-70b2-4f07-96ae-5233861175bb/extract-content/0.log" Dec 01 10:36:48 crc kubenswrapper[4933]: I1201 10:36:48.974846 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4hfd4_e1c51de3-51f6-4fb3-9800-fb97313a6212/registry-server/0.log" Dec 01 10:36:49 crc kubenswrapper[4933]: I1201 10:36:49.021747 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gpq4t_a5754f16-70b2-4f07-96ae-5233861175bb/extract-content/0.log" Dec 01 10:36:49 crc kubenswrapper[4933]: I1201 10:36:49.026474 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gpq4t_a5754f16-70b2-4f07-96ae-5233861175bb/extract-utilities/0.log" Dec 01 10:36:49 crc 
kubenswrapper[4933]: I1201 10:36:49.262472 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gpq4t_a5754f16-70b2-4f07-96ae-5233861175bb/registry-server/0.log" Dec 01 10:36:49 crc kubenswrapper[4933]: I1201 10:36:49.298803 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8w4rb_972d2150-cea0-4c55-9be0-bc7022d630e2/marketplace-operator/0.log" Dec 01 10:36:49 crc kubenswrapper[4933]: I1201 10:36:49.477211 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cvjll_879fdd94-31c3-4e2b-b47e-291738616c68/extract-utilities/0.log" Dec 01 10:36:49 crc kubenswrapper[4933]: I1201 10:36:49.735194 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cvjll_879fdd94-31c3-4e2b-b47e-291738616c68/extract-content/0.log" Dec 01 10:36:49 crc kubenswrapper[4933]: I1201 10:36:49.757999 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cvjll_879fdd94-31c3-4e2b-b47e-291738616c68/extract-content/0.log" Dec 01 10:36:49 crc kubenswrapper[4933]: I1201 10:36:49.780154 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cvjll_879fdd94-31c3-4e2b-b47e-291738616c68/extract-utilities/0.log" Dec 01 10:36:50 crc kubenswrapper[4933]: I1201 10:36:50.001989 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cvjll_879fdd94-31c3-4e2b-b47e-291738616c68/extract-content/0.log" Dec 01 10:36:50 crc kubenswrapper[4933]: I1201 10:36:50.001989 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cvjll_879fdd94-31c3-4e2b-b47e-291738616c68/extract-utilities/0.log" Dec 01 10:36:50 crc kubenswrapper[4933]: I1201 10:36:50.180152 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cvjll_879fdd94-31c3-4e2b-b47e-291738616c68/registry-server/0.log" Dec 01 10:36:50 crc kubenswrapper[4933]: I1201 10:36:50.284702 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4w7n9_56c63d65-3e36-4df8-9e8c-96a87d3a40d4/extract-utilities/0.log" Dec 01 10:36:50 crc kubenswrapper[4933]: I1201 10:36:50.445385 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4w7n9_56c63d65-3e36-4df8-9e8c-96a87d3a40d4/extract-content/0.log" Dec 01 10:36:50 crc kubenswrapper[4933]: I1201 10:36:50.478769 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4w7n9_56c63d65-3e36-4df8-9e8c-96a87d3a40d4/extract-content/0.log" Dec 01 10:36:50 crc kubenswrapper[4933]: I1201 10:36:50.498510 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4w7n9_56c63d65-3e36-4df8-9e8c-96a87d3a40d4/extract-utilities/0.log" Dec 01 10:36:50 crc kubenswrapper[4933]: I1201 10:36:50.637593 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4w7n9_56c63d65-3e36-4df8-9e8c-96a87d3a40d4/extract-utilities/0.log" Dec 01 10:36:50 crc kubenswrapper[4933]: I1201 10:36:50.665271 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4w7n9_56c63d65-3e36-4df8-9e8c-96a87d3a40d4/extract-content/0.log" Dec 01 10:36:50 crc kubenswrapper[4933]: I1201 
10:36:50.923383 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4w7n9_56c63d65-3e36-4df8-9e8c-96a87d3a40d4/registry-server/0.log" Dec 01 10:36:54 crc kubenswrapper[4933]: I1201 10:36:54.668427 4933 scope.go:117] "RemoveContainer" containerID="3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3" Dec 01 10:36:54 crc kubenswrapper[4933]: E1201 10:36:54.669147 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:37:09 crc kubenswrapper[4933]: I1201 10:37:09.677816 4933 scope.go:117] "RemoveContainer" containerID="3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3" Dec 01 10:37:09 crc kubenswrapper[4933]: E1201 10:37:09.678740 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:37:11 crc kubenswrapper[4933]: E1201 10:37:11.739661 4933 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.142:38294->38.102.83.142:45527: write tcp 38.102.83.142:38294->38.102.83.142:45527: write: broken pipe Dec 01 10:37:22 crc kubenswrapper[4933]: I1201 10:37:22.667804 4933 scope.go:117] "RemoveContainer" containerID="3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3" Dec 01 10:37:22 crc kubenswrapper[4933]: E1201 10:37:22.668846 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:37:34 crc kubenswrapper[4933]: I1201 10:37:34.667980 4933 scope.go:117] "RemoveContainer" containerID="3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3" Dec 01 10:37:34 crc kubenswrapper[4933]: E1201 10:37:34.669158 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:37:45 crc kubenswrapper[4933]: I1201 10:37:45.669516 4933 scope.go:117] "RemoveContainer" containerID="3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3" Dec 01 10:37:45 crc kubenswrapper[4933]: E1201 10:37:45.670820 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:37:59 crc kubenswrapper[4933]: I1201 10:37:59.674934 4933 scope.go:117] "RemoveContainer" containerID="3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3" Dec 01 10:37:59 crc kubenswrapper[4933]: E1201 10:37:59.675954 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:38:01 crc kubenswrapper[4933]: I1201 10:38:01.053650 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gmw4z"] Dec 01 10:38:01 crc kubenswrapper[4933]: E1201 10:38:01.054183 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a776435-8504-4dfb-8394-3dc3cbfb87a3" containerName="extract-utilities" Dec 01 10:38:01 crc kubenswrapper[4933]: I1201 10:38:01.054199 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a776435-8504-4dfb-8394-3dc3cbfb87a3" containerName="extract-utilities" Dec 01 10:38:01 crc kubenswrapper[4933]: E1201 10:38:01.054218 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f270b28-3b99-4ed7-95d6-051c4ada5ec6" containerName="container-00" Dec 01 10:38:01 crc kubenswrapper[4933]: I1201 10:38:01.054224 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f270b28-3b99-4ed7-95d6-051c4ada5ec6" containerName="container-00" Dec 01 10:38:01 crc kubenswrapper[4933]: E1201 10:38:01.054274 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a776435-8504-4dfb-8394-3dc3cbfb87a3" containerName="extract-content" Dec 01 10:38:01 crc kubenswrapper[4933]: I1201 10:38:01.054283 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a776435-8504-4dfb-8394-3dc3cbfb87a3" containerName="extract-content" Dec 01 10:38:01 crc kubenswrapper[4933]: E1201 10:38:01.054297 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a776435-8504-4dfb-8394-3dc3cbfb87a3" containerName="registry-server" Dec 01 10:38:01 crc kubenswrapper[4933]: I1201 10:38:01.054356 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a776435-8504-4dfb-8394-3dc3cbfb87a3" containerName="registry-server" Dec 01 10:38:01 crc kubenswrapper[4933]: I1201 10:38:01.054596 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a776435-8504-4dfb-8394-3dc3cbfb87a3" containerName="registry-server" Dec 01 10:38:01 crc kubenswrapper[4933]: I1201 10:38:01.054613 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f270b28-3b99-4ed7-95d6-051c4ada5ec6" containerName="container-00" Dec 01 10:38:01 crc kubenswrapper[4933]: I1201 10:38:01.092280 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gmw4z"] Dec 01 10:38:01 crc kubenswrapper[4933]: I1201 10:38:01.092515 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gmw4z" Dec 01 10:38:01 crc kubenswrapper[4933]: I1201 10:38:01.249907 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc03dfee-fa99-45af-8f9d-95abf9f085b5-catalog-content\") pod \"community-operators-gmw4z\" (UID: \"cc03dfee-fa99-45af-8f9d-95abf9f085b5\") " pod="openshift-marketplace/community-operators-gmw4z" Dec 01 10:38:01 crc kubenswrapper[4933]: I1201 10:38:01.250287 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnjtn\" (UniqueName: \"kubernetes.io/projected/cc03dfee-fa99-45af-8f9d-95abf9f085b5-kube-api-access-fnjtn\") pod \"community-operators-gmw4z\" (UID: \"cc03dfee-fa99-45af-8f9d-95abf9f085b5\") " pod="openshift-marketplace/community-operators-gmw4z" Dec 01 10:38:01 crc kubenswrapper[4933]: I1201 10:38:01.250416 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc03dfee-fa99-45af-8f9d-95abf9f085b5-utilities\") pod \"community-operators-gmw4z\" (UID: \"cc03dfee-fa99-45af-8f9d-95abf9f085b5\") " pod="openshift-marketplace/community-operators-gmw4z" Dec 01 10:38:01 crc kubenswrapper[4933]: I1201 10:38:01.352213 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc03dfee-fa99-45af-8f9d-95abf9f085b5-utilities\") pod \"community-operators-gmw4z\" (UID: \"cc03dfee-fa99-45af-8f9d-95abf9f085b5\") " pod="openshift-marketplace/community-operators-gmw4z" Dec 01 10:38:01 crc kubenswrapper[4933]: I1201 10:38:01.352404 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc03dfee-fa99-45af-8f9d-95abf9f085b5-catalog-content\") pod \"community-operators-gmw4z\" (UID: \"cc03dfee-fa99-45af-8f9d-95abf9f085b5\") " pod="openshift-marketplace/community-operators-gmw4z" Dec 01 10:38:01 crc kubenswrapper[4933]: I1201 10:38:01.352437 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnjtn\" (UniqueName: \"kubernetes.io/projected/cc03dfee-fa99-45af-8f9d-95abf9f085b5-kube-api-access-fnjtn\") pod \"community-operators-gmw4z\" (UID: \"cc03dfee-fa99-45af-8f9d-95abf9f085b5\") " pod="openshift-marketplace/community-operators-gmw4z" Dec 01 10:38:01 crc kubenswrapper[4933]: I1201 10:38:01.353295 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc03dfee-fa99-45af-8f9d-95abf9f085b5-catalog-content\") pod \"community-operators-gmw4z\" (UID: \"cc03dfee-fa99-45af-8f9d-95abf9f085b5\") " pod="openshift-marketplace/community-operators-gmw4z" Dec 01 10:38:01 crc kubenswrapper[4933]: I1201 10:38:01.353484 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc03dfee-fa99-45af-8f9d-95abf9f085b5-utilities\") pod \"community-operators-gmw4z\" (UID: \"cc03dfee-fa99-45af-8f9d-95abf9f085b5\") " pod="openshift-marketplace/community-operators-gmw4z" Dec 01 10:38:01 crc kubenswrapper[4933]: I1201 10:38:01.379240 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnjtn\" (UniqueName: \"kubernetes.io/projected/cc03dfee-fa99-45af-8f9d-95abf9f085b5-kube-api-access-fnjtn\") pod 
\"community-operators-gmw4z\" (UID: \"cc03dfee-fa99-45af-8f9d-95abf9f085b5\") " pod="openshift-marketplace/community-operators-gmw4z" Dec 01 10:38:01 crc kubenswrapper[4933]: I1201 10:38:01.431782 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmw4z" Dec 01 10:38:01 crc kubenswrapper[4933]: I1201 10:38:01.988734 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gmw4z"] Dec 01 10:38:02 crc kubenswrapper[4933]: I1201 10:38:02.189770 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmw4z" event={"ID":"cc03dfee-fa99-45af-8f9d-95abf9f085b5","Type":"ContainerStarted","Data":"312cd1b9dc2ea3f10f1fd5bca9397ce523d6d4404dfec3001faec71ae6c8ee20"} Dec 01 10:38:03 crc kubenswrapper[4933]: I1201 10:38:03.199951 4933 generic.go:334] "Generic (PLEG): container finished" podID="cc03dfee-fa99-45af-8f9d-95abf9f085b5" containerID="04ea1eb7844e6d11215a421827830665b397a29ceafccaefbfcb7439157501d4" exitCode=0 Dec 01 10:38:03 crc kubenswrapper[4933]: I1201 10:38:03.200350 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmw4z" event={"ID":"cc03dfee-fa99-45af-8f9d-95abf9f085b5","Type":"ContainerDied","Data":"04ea1eb7844e6d11215a421827830665b397a29ceafccaefbfcb7439157501d4"} Dec 01 10:38:03 crc kubenswrapper[4933]: I1201 10:38:03.204254 4933 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 10:38:04 crc kubenswrapper[4933]: I1201 10:38:04.214174 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmw4z" event={"ID":"cc03dfee-fa99-45af-8f9d-95abf9f085b5","Type":"ContainerStarted","Data":"9cfb65ccd80337b78677a931703dbfd060a3672dee33a062e13f5e7c81717c6a"} Dec 01 10:38:05 crc kubenswrapper[4933]: I1201 10:38:05.227609 4933 generic.go:334] "Generic (PLEG): container finished" podID="cc03dfee-fa99-45af-8f9d-95abf9f085b5" containerID="9cfb65ccd80337b78677a931703dbfd060a3672dee33a062e13f5e7c81717c6a" exitCode=0 Dec 01 10:38:05 crc kubenswrapper[4933]: I1201 10:38:05.227823 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmw4z" event={"ID":"cc03dfee-fa99-45af-8f9d-95abf9f085b5","Type":"ContainerDied","Data":"9cfb65ccd80337b78677a931703dbfd060a3672dee33a062e13f5e7c81717c6a"} Dec 01 10:38:06 crc kubenswrapper[4933]: I1201 10:38:06.238240 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmw4z" event={"ID":"cc03dfee-fa99-45af-8f9d-95abf9f085b5","Type":"ContainerStarted","Data":"82b9d1b7e3dd66631652218655e7ede3c517115db51a410ca121fda8f28bc7e9"} Dec 01 10:38:06 crc kubenswrapper[4933]: I1201 10:38:06.269529 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gmw4z" podStartSLOduration=2.800989978 podStartE2EDuration="5.269510512s" podCreationTimestamp="2025-12-01 10:38:01 +0000 UTC" firstStartedPulling="2025-12-01 10:38:03.203930259 +0000 UTC m=+3973.845653874" lastFinishedPulling="2025-12-01 10:38:05.672450793 +0000 UTC m=+3976.314174408" observedRunningTime="2025-12-01 10:38:06.266380544 +0000 UTC m=+3976.908104179" watchObservedRunningTime="2025-12-01 10:38:06.269510512 +0000 UTC m=+3976.911234127" Dec 01 10:38:11 crc kubenswrapper[4933]: I1201 10:38:11.433087 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-gmw4z" Dec 01 10:38:11 crc kubenswrapper[4933]: I1201 10:38:11.433968 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gmw4z" Dec 01 10:38:11 crc kubenswrapper[4933]: I1201 10:38:11.505632 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gmw4z" Dec 01 10:38:12 crc kubenswrapper[4933]: I1201 10:38:12.349993 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gmw4z" Dec 01 10:38:12 crc kubenswrapper[4933]: I1201 10:38:12.407436 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gmw4z"] Dec 01 10:38:12 crc kubenswrapper[4933]: I1201 10:38:12.668284 4933 scope.go:117] "RemoveContainer" containerID="3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3" Dec 01 10:38:12 crc kubenswrapper[4933]: E1201 10:38:12.668515 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:38:14 crc kubenswrapper[4933]: I1201 10:38:14.320403 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gmw4z" podUID="cc03dfee-fa99-45af-8f9d-95abf9f085b5" containerName="registry-server" containerID="cri-o://82b9d1b7e3dd66631652218655e7ede3c517115db51a410ca121fda8f28bc7e9" gracePeriod=2 Dec 01 10:38:14 crc kubenswrapper[4933]: I1201 10:38:14.918204 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmw4z" Dec 01 10:38:15 crc kubenswrapper[4933]: I1201 10:38:15.064388 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnjtn\" (UniqueName: \"kubernetes.io/projected/cc03dfee-fa99-45af-8f9d-95abf9f085b5-kube-api-access-fnjtn\") pod \"cc03dfee-fa99-45af-8f9d-95abf9f085b5\" (UID: \"cc03dfee-fa99-45af-8f9d-95abf9f085b5\") " Dec 01 10:38:15 crc kubenswrapper[4933]: I1201 10:38:15.064540 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc03dfee-fa99-45af-8f9d-95abf9f085b5-utilities\") pod \"cc03dfee-fa99-45af-8f9d-95abf9f085b5\" (UID: \"cc03dfee-fa99-45af-8f9d-95abf9f085b5\") " Dec 01 10:38:15 crc kubenswrapper[4933]: I1201 10:38:15.064753 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc03dfee-fa99-45af-8f9d-95abf9f085b5-catalog-content\") pod \"cc03dfee-fa99-45af-8f9d-95abf9f085b5\" (UID: \"cc03dfee-fa99-45af-8f9d-95abf9f085b5\") " Dec 01 10:38:15 crc kubenswrapper[4933]: I1201 10:38:15.065420 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc03dfee-fa99-45af-8f9d-95abf9f085b5-utilities" (OuterVolumeSpecName: "utilities") pod "cc03dfee-fa99-45af-8f9d-95abf9f085b5" (UID: "cc03dfee-fa99-45af-8f9d-95abf9f085b5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:38:15 crc kubenswrapper[4933]: I1201 10:38:15.072547 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc03dfee-fa99-45af-8f9d-95abf9f085b5-kube-api-access-fnjtn" (OuterVolumeSpecName: "kube-api-access-fnjtn") pod "cc03dfee-fa99-45af-8f9d-95abf9f085b5" (UID: "cc03dfee-fa99-45af-8f9d-95abf9f085b5"). InnerVolumeSpecName "kube-api-access-fnjtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:38:15 crc kubenswrapper[4933]: I1201 10:38:15.117766 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc03dfee-fa99-45af-8f9d-95abf9f085b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc03dfee-fa99-45af-8f9d-95abf9f085b5" (UID: "cc03dfee-fa99-45af-8f9d-95abf9f085b5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:38:15 crc kubenswrapper[4933]: I1201 10:38:15.167975 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc03dfee-fa99-45af-8f9d-95abf9f085b5-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 10:38:15 crc kubenswrapper[4933]: I1201 10:38:15.168011 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc03dfee-fa99-45af-8f9d-95abf9f085b5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 10:38:15 crc kubenswrapper[4933]: I1201 10:38:15.168023 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnjtn\" (UniqueName: \"kubernetes.io/projected/cc03dfee-fa99-45af-8f9d-95abf9f085b5-kube-api-access-fnjtn\") on node \"crc\" DevicePath \"\"" Dec 01 10:38:15 crc kubenswrapper[4933]: I1201 10:38:15.332190 4933 generic.go:334] "Generic (PLEG): container finished" podID="cc03dfee-fa99-45af-8f9d-95abf9f085b5" containerID="82b9d1b7e3dd66631652218655e7ede3c517115db51a410ca121fda8f28bc7e9" exitCode=0 Dec 01 10:38:15 crc kubenswrapper[4933]: I1201 10:38:15.332243 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmw4z" event={"ID":"cc03dfee-fa99-45af-8f9d-95abf9f085b5","Type":"ContainerDied","Data":"82b9d1b7e3dd66631652218655e7ede3c517115db51a410ca121fda8f28bc7e9"} Dec 01 10:38:15 crc kubenswrapper[4933]: I1201 10:38:15.332342 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmw4z" event={"ID":"cc03dfee-fa99-45af-8f9d-95abf9f085b5","Type":"ContainerDied","Data":"312cd1b9dc2ea3f10f1fd5bca9397ce523d6d4404dfec3001faec71ae6c8ee20"} Dec 01 10:38:15 crc kubenswrapper[4933]: I1201 10:38:15.332366 4933 scope.go:117] "RemoveContainer" containerID="82b9d1b7e3dd66631652218655e7ede3c517115db51a410ca121fda8f28bc7e9" Dec 01 10:38:15 crc kubenswrapper[4933]: I1201 10:38:15.333525 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gmw4z" Dec 01 10:38:15 crc kubenswrapper[4933]: I1201 10:38:15.358869 4933 scope.go:117] "RemoveContainer" containerID="9cfb65ccd80337b78677a931703dbfd060a3672dee33a062e13f5e7c81717c6a" Dec 01 10:38:15 crc kubenswrapper[4933]: I1201 10:38:15.386081 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gmw4z"] Dec 01 10:38:15 crc kubenswrapper[4933]: I1201 10:38:15.394065 4933 scope.go:117] "RemoveContainer" containerID="04ea1eb7844e6d11215a421827830665b397a29ceafccaefbfcb7439157501d4" Dec 01 10:38:15 crc kubenswrapper[4933]: I1201 10:38:15.396509 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gmw4z"] Dec 01 10:38:15 crc kubenswrapper[4933]: I1201 10:38:15.430220 4933 scope.go:117] "RemoveContainer" containerID="82b9d1b7e3dd66631652218655e7ede3c517115db51a410ca121fda8f28bc7e9" Dec 01 10:38:15 crc kubenswrapper[4933]: E1201 10:38:15.430843 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82b9d1b7e3dd66631652218655e7ede3c517115db51a410ca121fda8f28bc7e9\": container with ID starting with 82b9d1b7e3dd66631652218655e7ede3c517115db51a410ca121fda8f28bc7e9 not found: ID does not exist" containerID="82b9d1b7e3dd66631652218655e7ede3c517115db51a410ca121fda8f28bc7e9" Dec 01 10:38:15 crc kubenswrapper[4933]: I1201 10:38:15.430898 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82b9d1b7e3dd66631652218655e7ede3c517115db51a410ca121fda8f28bc7e9"} err="failed to get container status \"82b9d1b7e3dd66631652218655e7ede3c517115db51a410ca121fda8f28bc7e9\": rpc error: code = NotFound desc = could not find container \"82b9d1b7e3dd66631652218655e7ede3c517115db51a410ca121fda8f28bc7e9\": container with ID starting with 82b9d1b7e3dd66631652218655e7ede3c517115db51a410ca121fda8f28bc7e9 not found: ID does not exist" Dec 01 10:38:15 crc kubenswrapper[4933]: I1201 10:38:15.430939 4933 scope.go:117] "RemoveContainer" containerID="9cfb65ccd80337b78677a931703dbfd060a3672dee33a062e13f5e7c81717c6a" Dec 01 10:38:15 crc kubenswrapper[4933]: E1201 10:38:15.431447 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cfb65ccd80337b78677a931703dbfd060a3672dee33a062e13f5e7c81717c6a\": container with ID starting with 9cfb65ccd80337b78677a931703dbfd060a3672dee33a062e13f5e7c81717c6a not found: ID does not exist" containerID="9cfb65ccd80337b78677a931703dbfd060a3672dee33a062e13f5e7c81717c6a" Dec 01 10:38:15 crc kubenswrapper[4933]: I1201 10:38:15.431480 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cfb65ccd80337b78677a931703dbfd060a3672dee33a062e13f5e7c81717c6a"} err="failed to get container status \"9cfb65ccd80337b78677a931703dbfd060a3672dee33a062e13f5e7c81717c6a\": rpc error: code = NotFound desc = could not find container \"9cfb65ccd80337b78677a931703dbfd060a3672dee33a062e13f5e7c81717c6a\": container with ID starting with 9cfb65ccd80337b78677a931703dbfd060a3672dee33a062e13f5e7c81717c6a not found: ID does not exist" Dec 01 10:38:15 crc kubenswrapper[4933]: I1201 10:38:15.431502 4933 scope.go:117] "RemoveContainer" containerID="04ea1eb7844e6d11215a421827830665b397a29ceafccaefbfcb7439157501d4" Dec 01 10:38:15 crc kubenswrapper[4933]: E1201 10:38:15.431783 4933 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"04ea1eb7844e6d11215a421827830665b397a29ceafccaefbfcb7439157501d4\": container with ID starting with 04ea1eb7844e6d11215a421827830665b397a29ceafccaefbfcb7439157501d4 not found: ID does not exist" containerID="04ea1eb7844e6d11215a421827830665b397a29ceafccaefbfcb7439157501d4" Dec 01 10:38:15 crc kubenswrapper[4933]: I1201 10:38:15.431812 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04ea1eb7844e6d11215a421827830665b397a29ceafccaefbfcb7439157501d4"} err="failed to get container status \"04ea1eb7844e6d11215a421827830665b397a29ceafccaefbfcb7439157501d4\": rpc error: code = NotFound desc = could not find container \"04ea1eb7844e6d11215a421827830665b397a29ceafccaefbfcb7439157501d4\": container with ID starting with 04ea1eb7844e6d11215a421827830665b397a29ceafccaefbfcb7439157501d4 not found: ID does not exist" Dec 01 10:38:15 crc kubenswrapper[4933]: I1201 10:38:15.681233 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc03dfee-fa99-45af-8f9d-95abf9f085b5" path="/var/lib/kubelet/pods/cc03dfee-fa99-45af-8f9d-95abf9f085b5/volumes" Dec 01 10:38:27 crc kubenswrapper[4933]: I1201 10:38:27.668474 4933 scope.go:117] "RemoveContainer" containerID="3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3" Dec 01 10:38:27 crc kubenswrapper[4933]: E1201 10:38:27.670002 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:38:42 crc kubenswrapper[4933]: I1201 10:38:42.667821 4933 scope.go:117] "RemoveContainer" containerID="3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3" Dec 01 10:38:42 crc kubenswrapper[4933]: E1201 10:38:42.668806 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:38:43 crc kubenswrapper[4933]: I1201 10:38:43.678704 4933 generic.go:334] "Generic (PLEG): container finished" podID="66506bb4-0cd2-435d-a8a3-efd4baf4a548" containerID="8c2db9e9dd3f021dfed91ae25e13bb67cd61940a46ade445478d4db42cb9a568" exitCode=0 Dec 01 10:38:43 crc kubenswrapper[4933]: I1201 10:38:43.678921 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-srq4c/must-gather-8vkvd" event={"ID":"66506bb4-0cd2-435d-a8a3-efd4baf4a548","Type":"ContainerDied","Data":"8c2db9e9dd3f021dfed91ae25e13bb67cd61940a46ade445478d4db42cb9a568"} Dec 01 10:38:43 crc kubenswrapper[4933]: I1201 10:38:43.679713 4933 scope.go:117] "RemoveContainer" containerID="8c2db9e9dd3f021dfed91ae25e13bb67cd61940a46ade445478d4db42cb9a568" Dec 01 10:38:44 crc kubenswrapper[4933]: I1201 10:38:44.569362 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-srq4c_must-gather-8vkvd_66506bb4-0cd2-435d-a8a3-efd4baf4a548/gather/0.log" Dec 01 10:38:51 crc kubenswrapper[4933]: 
I1201 10:38:51.895786 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-srq4c/must-gather-8vkvd"] Dec 01 10:38:51 crc kubenswrapper[4933]: I1201 10:38:51.897026 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-srq4c/must-gather-8vkvd" podUID="66506bb4-0cd2-435d-a8a3-efd4baf4a548" containerName="copy" containerID="cri-o://fd511bffa4845f96bd89b80049a8d84f9d820caa42aab1b9abf25f040d43e8f4" gracePeriod=2 Dec 01 10:38:51 crc kubenswrapper[4933]: I1201 10:38:51.910124 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-srq4c/must-gather-8vkvd"] Dec 01 10:38:52 crc kubenswrapper[4933]: I1201 10:38:52.461683 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-srq4c_must-gather-8vkvd_66506bb4-0cd2-435d-a8a3-efd4baf4a548/copy/0.log" Dec 01 10:38:52 crc kubenswrapper[4933]: I1201 10:38:52.462907 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-srq4c/must-gather-8vkvd" Dec 01 10:38:52 crc kubenswrapper[4933]: I1201 10:38:52.539718 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/66506bb4-0cd2-435d-a8a3-efd4baf4a548-must-gather-output\") pod \"66506bb4-0cd2-435d-a8a3-efd4baf4a548\" (UID: \"66506bb4-0cd2-435d-a8a3-efd4baf4a548\") " Dec 01 10:38:52 crc kubenswrapper[4933]: I1201 10:38:52.539883 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmsfj\" (UniqueName: \"kubernetes.io/projected/66506bb4-0cd2-435d-a8a3-efd4baf4a548-kube-api-access-wmsfj\") pod \"66506bb4-0cd2-435d-a8a3-efd4baf4a548\" (UID: \"66506bb4-0cd2-435d-a8a3-efd4baf4a548\") " Dec 01 10:38:52 crc kubenswrapper[4933]: I1201 10:38:52.546081 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66506bb4-0cd2-435d-a8a3-efd4baf4a548-kube-api-access-wmsfj" (OuterVolumeSpecName: "kube-api-access-wmsfj") pod "66506bb4-0cd2-435d-a8a3-efd4baf4a548" (UID: "66506bb4-0cd2-435d-a8a3-efd4baf4a548"). InnerVolumeSpecName "kube-api-access-wmsfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:38:52 crc kubenswrapper[4933]: I1201 10:38:52.641324 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmsfj\" (UniqueName: \"kubernetes.io/projected/66506bb4-0cd2-435d-a8a3-efd4baf4a548-kube-api-access-wmsfj\") on node \"crc\" DevicePath \"\"" Dec 01 10:38:52 crc kubenswrapper[4933]: I1201 10:38:52.691645 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66506bb4-0cd2-435d-a8a3-efd4baf4a548-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "66506bb4-0cd2-435d-a8a3-efd4baf4a548" (UID: "66506bb4-0cd2-435d-a8a3-efd4baf4a548"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:38:52 crc kubenswrapper[4933]: I1201 10:38:52.743609 4933 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/66506bb4-0cd2-435d-a8a3-efd4baf4a548-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 01 10:38:52 crc kubenswrapper[4933]: I1201 10:38:52.778108 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-srq4c_must-gather-8vkvd_66506bb4-0cd2-435d-a8a3-efd4baf4a548/copy/0.log" Dec 01 10:38:52 crc kubenswrapper[4933]: I1201 10:38:52.778942 4933 generic.go:334] "Generic (PLEG): container finished" podID="66506bb4-0cd2-435d-a8a3-efd4baf4a548" containerID="fd511bffa4845f96bd89b80049a8d84f9d820caa42aab1b9abf25f040d43e8f4" exitCode=143 Dec 01 10:38:52 crc kubenswrapper[4933]: I1201 10:38:52.779023 4933 scope.go:117] "RemoveContainer" containerID="fd511bffa4845f96bd89b80049a8d84f9d820caa42aab1b9abf25f040d43e8f4" Dec 01 10:38:52 crc kubenswrapper[4933]: I1201 10:38:52.779087 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-srq4c/must-gather-8vkvd" Dec 01 10:38:52 crc kubenswrapper[4933]: I1201 10:38:52.799018 4933 scope.go:117] "RemoveContainer" containerID="8c2db9e9dd3f021dfed91ae25e13bb67cd61940a46ade445478d4db42cb9a568" Dec 01 10:38:52 crc kubenswrapper[4933]: I1201 10:38:52.831402 4933 scope.go:117] "RemoveContainer" containerID="fd511bffa4845f96bd89b80049a8d84f9d820caa42aab1b9abf25f040d43e8f4" Dec 01 10:38:52 crc kubenswrapper[4933]: E1201 10:38:52.832020 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd511bffa4845f96bd89b80049a8d84f9d820caa42aab1b9abf25f040d43e8f4\": container with ID starting with fd511bffa4845f96bd89b80049a8d84f9d820caa42aab1b9abf25f040d43e8f4 not found: ID does not exist" containerID="fd511bffa4845f96bd89b80049a8d84f9d820caa42aab1b9abf25f040d43e8f4" Dec 01 10:38:52 crc kubenswrapper[4933]: I1201 10:38:52.832063 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd511bffa4845f96bd89b80049a8d84f9d820caa42aab1b9abf25f040d43e8f4"} err="failed to get container status \"fd511bffa4845f96bd89b80049a8d84f9d820caa42aab1b9abf25f040d43e8f4\": rpc error: code = NotFound desc = could not find container \"fd511bffa4845f96bd89b80049a8d84f9d820caa42aab1b9abf25f040d43e8f4\": container with ID starting with fd511bffa4845f96bd89b80049a8d84f9d820caa42aab1b9abf25f040d43e8f4 not found: ID does not exist" Dec 01 10:38:52 crc kubenswrapper[4933]: I1201 10:38:52.832095 4933 scope.go:117] "RemoveContainer" containerID="8c2db9e9dd3f021dfed91ae25e13bb67cd61940a46ade445478d4db42cb9a568" Dec 01 10:38:52 crc kubenswrapper[4933]: E1201 10:38:52.832667 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c2db9e9dd3f021dfed91ae25e13bb67cd61940a46ade445478d4db42cb9a568\": container with ID starting with 8c2db9e9dd3f021dfed91ae25e13bb67cd61940a46ade445478d4db42cb9a568 not found: ID does not exist" containerID="8c2db9e9dd3f021dfed91ae25e13bb67cd61940a46ade445478d4db42cb9a568" Dec 01 10:38:52 crc kubenswrapper[4933]: I1201 10:38:52.832724 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c2db9e9dd3f021dfed91ae25e13bb67cd61940a46ade445478d4db42cb9a568"} err="failed to get container status 
\"8c2db9e9dd3f021dfed91ae25e13bb67cd61940a46ade445478d4db42cb9a568\": rpc error: code = NotFound desc = could not find container \"8c2db9e9dd3f021dfed91ae25e13bb67cd61940a46ade445478d4db42cb9a568\": container with ID starting with 8c2db9e9dd3f021dfed91ae25e13bb67cd61940a46ade445478d4db42cb9a568 not found: ID does not exist" Dec 01 10:38:53 crc kubenswrapper[4933]: I1201 10:38:53.683407 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66506bb4-0cd2-435d-a8a3-efd4baf4a548" path="/var/lib/kubelet/pods/66506bb4-0cd2-435d-a8a3-efd4baf4a548/volumes" Dec 01 10:38:56 crc kubenswrapper[4933]: I1201 10:38:56.667452 4933 scope.go:117] "RemoveContainer" containerID="3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3" Dec 01 10:38:56 crc kubenswrapper[4933]: E1201 10:38:56.668348 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:39:07 crc kubenswrapper[4933]: I1201 10:39:07.668942 4933 scope.go:117] "RemoveContainer" containerID="3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3" Dec 01 10:39:07 crc kubenswrapper[4933]: E1201 10:39:07.671680 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:39:21 crc kubenswrapper[4933]: I1201 10:39:21.668204 4933 scope.go:117] "RemoveContainer" containerID="3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3" Dec 01 10:39:21 crc kubenswrapper[4933]: E1201 10:39:21.669153 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:39:21 crc kubenswrapper[4933]: I1201 10:39:21.711713 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mqz4f"] Dec 01 10:39:21 crc kubenswrapper[4933]: E1201 10:39:21.712042 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66506bb4-0cd2-435d-a8a3-efd4baf4a548" containerName="gather" Dec 01 10:39:21 crc kubenswrapper[4933]: I1201 10:39:21.712058 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="66506bb4-0cd2-435d-a8a3-efd4baf4a548" containerName="gather" Dec 01 10:39:21 crc kubenswrapper[4933]: E1201 10:39:21.712084 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66506bb4-0cd2-435d-a8a3-efd4baf4a548" containerName="copy" Dec 01 10:39:21 crc kubenswrapper[4933]: I1201 10:39:21.712090 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="66506bb4-0cd2-435d-a8a3-efd4baf4a548" containerName="copy" Dec 01 10:39:21 crc 
kubenswrapper[4933]: E1201 10:39:21.712104 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc03dfee-fa99-45af-8f9d-95abf9f085b5" containerName="registry-server" Dec 01 10:39:21 crc kubenswrapper[4933]: I1201 10:39:21.712110 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc03dfee-fa99-45af-8f9d-95abf9f085b5" containerName="registry-server" Dec 01 10:39:21 crc kubenswrapper[4933]: E1201 10:39:21.712125 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc03dfee-fa99-45af-8f9d-95abf9f085b5" containerName="extract-utilities" Dec 01 10:39:21 crc kubenswrapper[4933]: I1201 10:39:21.712131 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc03dfee-fa99-45af-8f9d-95abf9f085b5" containerName="extract-utilities" Dec 01 10:39:21 crc kubenswrapper[4933]: E1201 10:39:21.712142 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc03dfee-fa99-45af-8f9d-95abf9f085b5" containerName="extract-content" Dec 01 10:39:21 crc kubenswrapper[4933]: I1201 10:39:21.712148 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc03dfee-fa99-45af-8f9d-95abf9f085b5" containerName="extract-content" Dec 01 10:39:21 crc kubenswrapper[4933]: I1201 10:39:21.712769 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="66506bb4-0cd2-435d-a8a3-efd4baf4a548" containerName="gather" Dec 01 10:39:21 crc kubenswrapper[4933]: I1201 10:39:21.712801 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc03dfee-fa99-45af-8f9d-95abf9f085b5" containerName="registry-server" Dec 01 10:39:21 crc kubenswrapper[4933]: I1201 10:39:21.712825 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="66506bb4-0cd2-435d-a8a3-efd4baf4a548" containerName="copy" Dec 01 10:39:21 crc kubenswrapper[4933]: I1201 10:39:21.714901 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mqz4f"] Dec 01 10:39:21 crc kubenswrapper[4933]: I1201 10:39:21.714989 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mqz4f" Dec 01 10:39:21 crc kubenswrapper[4933]: I1201 10:39:21.900966 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwtns\" (UniqueName: \"kubernetes.io/projected/c00c1834-23c4-4f9f-af4a-6a7611f45800-kube-api-access-mwtns\") pod \"redhat-marketplace-mqz4f\" (UID: \"c00c1834-23c4-4f9f-af4a-6a7611f45800\") " pod="openshift-marketplace/redhat-marketplace-mqz4f" Dec 01 10:39:21 crc kubenswrapper[4933]: I1201 10:39:21.901434 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c00c1834-23c4-4f9f-af4a-6a7611f45800-catalog-content\") pod \"redhat-marketplace-mqz4f\" (UID: \"c00c1834-23c4-4f9f-af4a-6a7611f45800\") " pod="openshift-marketplace/redhat-marketplace-mqz4f" Dec 01 10:39:21 crc kubenswrapper[4933]: I1201 10:39:21.901501 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c00c1834-23c4-4f9f-af4a-6a7611f45800-utilities\") pod \"redhat-marketplace-mqz4f\" (UID: \"c00c1834-23c4-4f9f-af4a-6a7611f45800\") " pod="openshift-marketplace/redhat-marketplace-mqz4f" Dec 01 10:39:22 crc kubenswrapper[4933]: I1201 10:39:22.003700 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwtns\" (UniqueName: \"kubernetes.io/projected/c00c1834-23c4-4f9f-af4a-6a7611f45800-kube-api-access-mwtns\") pod \"redhat-marketplace-mqz4f\" (UID: \"c00c1834-23c4-4f9f-af4a-6a7611f45800\") " pod="openshift-marketplace/redhat-marketplace-mqz4f" Dec 01 10:39:22 crc kubenswrapper[4933]: I1201 10:39:22.003756 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c00c1834-23c4-4f9f-af4a-6a7611f45800-catalog-content\") pod \"redhat-marketplace-mqz4f\" (UID: \"c00c1834-23c4-4f9f-af4a-6a7611f45800\") " pod="openshift-marketplace/redhat-marketplace-mqz4f" Dec 01 10:39:22 crc kubenswrapper[4933]: I1201 10:39:22.003799 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c00c1834-23c4-4f9f-af4a-6a7611f45800-utilities\") pod \"redhat-marketplace-mqz4f\" (UID: \"c00c1834-23c4-4f9f-af4a-6a7611f45800\") " pod="openshift-marketplace/redhat-marketplace-mqz4f" Dec 01 10:39:22 crc kubenswrapper[4933]: I1201 10:39:22.004380 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c00c1834-23c4-4f9f-af4a-6a7611f45800-utilities\") pod \"redhat-marketplace-mqz4f\" (UID: \"c00c1834-23c4-4f9f-af4a-6a7611f45800\") " pod="openshift-marketplace/redhat-marketplace-mqz4f" Dec 01 10:39:22 crc kubenswrapper[4933]: I1201 10:39:22.004470 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c00c1834-23c4-4f9f-af4a-6a7611f45800-catalog-content\") pod \"redhat-marketplace-mqz4f\" (UID: \"c00c1834-23c4-4f9f-af4a-6a7611f45800\") " pod="openshift-marketplace/redhat-marketplace-mqz4f" Dec 01 10:39:22 crc kubenswrapper[4933]: I1201 10:39:22.027158 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwtns\" (UniqueName: \"kubernetes.io/projected/c00c1834-23c4-4f9f-af4a-6a7611f45800-kube-api-access-mwtns\") pod 
\"redhat-marketplace-mqz4f\" (UID: \"c00c1834-23c4-4f9f-af4a-6a7611f45800\") " pod="openshift-marketplace/redhat-marketplace-mqz4f" Dec 01 10:39:22 crc kubenswrapper[4933]: I1201 10:39:22.048065 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mqz4f" Dec 01 10:39:22 crc kubenswrapper[4933]: I1201 10:39:22.560676 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mqz4f"] Dec 01 10:39:23 crc kubenswrapper[4933]: I1201 10:39:23.100817 4933 generic.go:334] "Generic (PLEG): container finished" podID="c00c1834-23c4-4f9f-af4a-6a7611f45800" containerID="acad68f61bd5b0054d0da3ecb773bac322cbe0ee4fb978ecd3e33883961d9532" exitCode=0 Dec 01 10:39:23 crc kubenswrapper[4933]: I1201 10:39:23.100892 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqz4f" event={"ID":"c00c1834-23c4-4f9f-af4a-6a7611f45800","Type":"ContainerDied","Data":"acad68f61bd5b0054d0da3ecb773bac322cbe0ee4fb978ecd3e33883961d9532"} Dec 01 10:39:23 crc kubenswrapper[4933]: I1201 10:39:23.101211 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqz4f" event={"ID":"c00c1834-23c4-4f9f-af4a-6a7611f45800","Type":"ContainerStarted","Data":"1670690c7ad0d3cafc0cac2ec949ca640949a30d840052ffb4d940f369472419"} Dec 01 10:39:24 crc kubenswrapper[4933]: I1201 10:39:24.113203 4933 generic.go:334] "Generic (PLEG): container finished" podID="c00c1834-23c4-4f9f-af4a-6a7611f45800" containerID="7db76d1b4f3bf0ec15487417d706514c96c22e250746100ada71b8f5dfed67f0" exitCode=0 Dec 01 10:39:24 crc kubenswrapper[4933]: I1201 10:39:24.113409 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqz4f" event={"ID":"c00c1834-23c4-4f9f-af4a-6a7611f45800","Type":"ContainerDied","Data":"7db76d1b4f3bf0ec15487417d706514c96c22e250746100ada71b8f5dfed67f0"} Dec 01 10:39:24 crc kubenswrapper[4933]: I1201 10:39:24.897995 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xkb84"] Dec 01 10:39:24 crc kubenswrapper[4933]: I1201 10:39:24.903673 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xkb84" Dec 01 10:39:24 crc kubenswrapper[4933]: I1201 10:39:24.915746 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xkb84"] Dec 01 10:39:24 crc kubenswrapper[4933]: I1201 10:39:24.991991 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff80ad71-9253-4397-80d9-13758b2a8aa3-utilities\") pod \"certified-operators-xkb84\" (UID: \"ff80ad71-9253-4397-80d9-13758b2a8aa3\") " pod="openshift-marketplace/certified-operators-xkb84" Dec 01 10:39:24 crc kubenswrapper[4933]: I1201 10:39:24.992321 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72mgx\" (UniqueName: \"kubernetes.io/projected/ff80ad71-9253-4397-80d9-13758b2a8aa3-kube-api-access-72mgx\") pod \"certified-operators-xkb84\" (UID: \"ff80ad71-9253-4397-80d9-13758b2a8aa3\") " pod="openshift-marketplace/certified-operators-xkb84" Dec 01 10:39:24 crc kubenswrapper[4933]: I1201 10:39:24.992388 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff80ad71-9253-4397-80d9-13758b2a8aa3-catalog-content\") pod \"certified-operators-xkb84\" (UID: \"ff80ad71-9253-4397-80d9-13758b2a8aa3\") " pod="openshift-marketplace/certified-operators-xkb84" Dec 01 10:39:25 crc kubenswrapper[4933]: I1201 10:39:25.094275 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72mgx\" (UniqueName: \"kubernetes.io/projected/ff80ad71-9253-4397-80d9-13758b2a8aa3-kube-api-access-72mgx\") pod \"certified-operators-xkb84\" (UID: \"ff80ad71-9253-4397-80d9-13758b2a8aa3\") " pod="openshift-marketplace/certified-operators-xkb84" Dec 01 10:39:25 crc kubenswrapper[4933]: I1201 10:39:25.094365 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff80ad71-9253-4397-80d9-13758b2a8aa3-catalog-content\") pod \"certified-operators-xkb84\" (UID: \"ff80ad71-9253-4397-80d9-13758b2a8aa3\") " pod="openshift-marketplace/certified-operators-xkb84" Dec 01 10:39:25 crc kubenswrapper[4933]: I1201 10:39:25.094464 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff80ad71-9253-4397-80d9-13758b2a8aa3-utilities\") pod \"certified-operators-xkb84\" (UID: \"ff80ad71-9253-4397-80d9-13758b2a8aa3\") " pod="openshift-marketplace/certified-operators-xkb84" Dec 01 10:39:25 crc kubenswrapper[4933]: I1201 10:39:25.094997 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff80ad71-9253-4397-80d9-13758b2a8aa3-catalog-content\") pod \"certified-operators-xkb84\" (UID: \"ff80ad71-9253-4397-80d9-13758b2a8aa3\") " pod="openshift-marketplace/certified-operators-xkb84" Dec 01 10:39:25 crc kubenswrapper[4933]: I1201 10:39:25.095103 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff80ad71-9253-4397-80d9-13758b2a8aa3-utilities\") pod \"certified-operators-xkb84\" (UID: \"ff80ad71-9253-4397-80d9-13758b2a8aa3\") " pod="openshift-marketplace/certified-operators-xkb84" Dec 01 10:39:25 crc kubenswrapper[4933]: I1201 10:39:25.126232 4933 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-72mgx\" (UniqueName: \"kubernetes.io/projected/ff80ad71-9253-4397-80d9-13758b2a8aa3-kube-api-access-72mgx\") pod \"certified-operators-xkb84\" (UID: \"ff80ad71-9253-4397-80d9-13758b2a8aa3\") " pod="openshift-marketplace/certified-operators-xkb84" Dec 01 10:39:25 crc kubenswrapper[4933]: I1201 10:39:25.131748 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqz4f" event={"ID":"c00c1834-23c4-4f9f-af4a-6a7611f45800","Type":"ContainerStarted","Data":"65b696030b82a5255a07d966e27434a2b0f81256183ca065ad2d983e8f2b79ff"} Dec 01 10:39:25 crc kubenswrapper[4933]: I1201 10:39:25.163367 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mqz4f" podStartSLOduration=2.56172231 podStartE2EDuration="4.163336295s" podCreationTimestamp="2025-12-01 10:39:21 +0000 UTC" firstStartedPulling="2025-12-01 10:39:23.102483934 +0000 UTC m=+4053.744207549" lastFinishedPulling="2025-12-01 10:39:24.704097909 +0000 UTC m=+4055.345821534" observedRunningTime="2025-12-01 10:39:25.151502142 +0000 UTC m=+4055.793225767" watchObservedRunningTime="2025-12-01 10:39:25.163336295 +0000 UTC m=+4055.805059910" Dec 01 10:39:25 crc kubenswrapper[4933]: I1201 10:39:25.266562 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xkb84" Dec 01 10:39:25 crc kubenswrapper[4933]: I1201 10:39:25.840244 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xkb84"] Dec 01 10:39:26 crc kubenswrapper[4933]: I1201 10:39:26.146640 4933 generic.go:334] "Generic (PLEG): container finished" podID="ff80ad71-9253-4397-80d9-13758b2a8aa3" containerID="877bf88e1ffe66bd1130ee61908320bef8f2d4d8e5646f4d8ea68df918abfdae" exitCode=0 Dec 01 10:39:26 crc kubenswrapper[4933]: I1201 10:39:26.146763 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkb84" event={"ID":"ff80ad71-9253-4397-80d9-13758b2a8aa3","Type":"ContainerDied","Data":"877bf88e1ffe66bd1130ee61908320bef8f2d4d8e5646f4d8ea68df918abfdae"} Dec 01 10:39:26 crc kubenswrapper[4933]: I1201 10:39:26.146828 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkb84" event={"ID":"ff80ad71-9253-4397-80d9-13758b2a8aa3","Type":"ContainerStarted","Data":"45a1f8bda18d10e4d385616a70ca3289883f5de89224dc6e3879d9f9a9b9d825"} Dec 01 10:39:27 crc kubenswrapper[4933]: I1201 10:39:27.164667 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkb84" event={"ID":"ff80ad71-9253-4397-80d9-13758b2a8aa3","Type":"ContainerStarted","Data":"a8db9be986faf4fc87346304c2de1476cc531c00d404c0d1c042f6c08a178344"} Dec 01 10:39:28 crc kubenswrapper[4933]: I1201 10:39:28.177187 4933 generic.go:334] "Generic (PLEG): container finished" podID="ff80ad71-9253-4397-80d9-13758b2a8aa3" containerID="a8db9be986faf4fc87346304c2de1476cc531c00d404c0d1c042f6c08a178344" exitCode=0 Dec 01 10:39:28 crc kubenswrapper[4933]: I1201 10:39:28.177282 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkb84" event={"ID":"ff80ad71-9253-4397-80d9-13758b2a8aa3","Type":"ContainerDied","Data":"a8db9be986faf4fc87346304c2de1476cc531c00d404c0d1c042f6c08a178344"} Dec 01 10:39:29 crc kubenswrapper[4933]: I1201 10:39:29.188920 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-xkb84" event={"ID":"ff80ad71-9253-4397-80d9-13758b2a8aa3","Type":"ContainerStarted","Data":"00f889dda18271745e235aa4e07f81c0c2aa7280c9152f18e444049ba3aeb75d"} Dec 01 10:39:29 crc kubenswrapper[4933]: I1201 10:39:29.218231 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xkb84" podStartSLOduration=2.724260596 podStartE2EDuration="5.218208557s" podCreationTimestamp="2025-12-01 10:39:24 +0000 UTC" firstStartedPulling="2025-12-01 10:39:26.14918657 +0000 UTC m=+4056.790910195" lastFinishedPulling="2025-12-01 10:39:28.643134541 +0000 UTC m=+4059.284858156" observedRunningTime="2025-12-01 10:39:29.208298202 +0000 UTC m=+4059.850021827" watchObservedRunningTime="2025-12-01 10:39:29.218208557 +0000 UTC m=+4059.859932172" Dec 01 10:39:32 crc kubenswrapper[4933]: I1201 10:39:32.048804 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mqz4f" Dec 01 10:39:32 crc kubenswrapper[4933]: I1201 10:39:32.049364 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mqz4f" Dec 01 10:39:32 crc kubenswrapper[4933]: I1201 10:39:32.116070 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mqz4f" Dec 01 10:39:32 crc kubenswrapper[4933]: I1201 10:39:32.282613 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mqz4f" Dec 01 10:39:32 crc kubenswrapper[4933]: I1201 10:39:32.457624 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mqz4f"] Dec 01 10:39:34 crc kubenswrapper[4933]: I1201 10:39:34.253822 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mqz4f" podUID="c00c1834-23c4-4f9f-af4a-6a7611f45800" containerName="registry-server" containerID="cri-o://65b696030b82a5255a07d966e27434a2b0f81256183ca065ad2d983e8f2b79ff" gracePeriod=2 Dec 01 10:39:35 crc kubenswrapper[4933]: I1201 10:39:35.264731 4933 generic.go:334] "Generic (PLEG): container finished" podID="c00c1834-23c4-4f9f-af4a-6a7611f45800" containerID="65b696030b82a5255a07d966e27434a2b0f81256183ca065ad2d983e8f2b79ff" exitCode=0 Dec 01 10:39:35 crc kubenswrapper[4933]: I1201 10:39:35.264784 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqz4f" event={"ID":"c00c1834-23c4-4f9f-af4a-6a7611f45800","Type":"ContainerDied","Data":"65b696030b82a5255a07d966e27434a2b0f81256183ca065ad2d983e8f2b79ff"} Dec 01 10:39:35 crc kubenswrapper[4933]: I1201 10:39:35.267033 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xkb84" Dec 01 10:39:35 crc kubenswrapper[4933]: I1201 10:39:35.267075 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xkb84" Dec 01 10:39:35 crc kubenswrapper[4933]: I1201 10:39:35.323150 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xkb84" Dec 01 10:39:35 crc kubenswrapper[4933]: I1201 10:39:35.752799 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mqz4f" Dec 01 10:39:35 crc kubenswrapper[4933]: I1201 10:39:35.849438 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c00c1834-23c4-4f9f-af4a-6a7611f45800-catalog-content\") pod \"c00c1834-23c4-4f9f-af4a-6a7611f45800\" (UID: \"c00c1834-23c4-4f9f-af4a-6a7611f45800\") " Dec 01 10:39:35 crc kubenswrapper[4933]: I1201 10:39:35.850044 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwtns\" (UniqueName: \"kubernetes.io/projected/c00c1834-23c4-4f9f-af4a-6a7611f45800-kube-api-access-mwtns\") pod \"c00c1834-23c4-4f9f-af4a-6a7611f45800\" (UID: \"c00c1834-23c4-4f9f-af4a-6a7611f45800\") " Dec 01 10:39:35 crc kubenswrapper[4933]: I1201 10:39:35.850334 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c00c1834-23c4-4f9f-af4a-6a7611f45800-utilities\") pod \"c00c1834-23c4-4f9f-af4a-6a7611f45800\" (UID: \"c00c1834-23c4-4f9f-af4a-6a7611f45800\") " Dec 01 10:39:35 crc kubenswrapper[4933]: I1201 10:39:35.851672 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c00c1834-23c4-4f9f-af4a-6a7611f45800-utilities" (OuterVolumeSpecName: "utilities") pod "c00c1834-23c4-4f9f-af4a-6a7611f45800" (UID: "c00c1834-23c4-4f9f-af4a-6a7611f45800"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:39:35 crc kubenswrapper[4933]: I1201 10:39:35.916832 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c00c1834-23c4-4f9f-af4a-6a7611f45800-kube-api-access-mwtns" (OuterVolumeSpecName: "kube-api-access-mwtns") pod "c00c1834-23c4-4f9f-af4a-6a7611f45800" (UID: "c00c1834-23c4-4f9f-af4a-6a7611f45800"). InnerVolumeSpecName "kube-api-access-mwtns". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:39:35 crc kubenswrapper[4933]: I1201 10:39:35.920971 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c00c1834-23c4-4f9f-af4a-6a7611f45800-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c00c1834-23c4-4f9f-af4a-6a7611f45800" (UID: "c00c1834-23c4-4f9f-af4a-6a7611f45800"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:39:35 crc kubenswrapper[4933]: I1201 10:39:35.952854 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c00c1834-23c4-4f9f-af4a-6a7611f45800-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 10:39:35 crc kubenswrapper[4933]: I1201 10:39:35.952897 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c00c1834-23c4-4f9f-af4a-6a7611f45800-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 10:39:35 crc kubenswrapper[4933]: I1201 10:39:35.952911 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwtns\" (UniqueName: \"kubernetes.io/projected/c00c1834-23c4-4f9f-af4a-6a7611f45800-kube-api-access-mwtns\") on node \"crc\" DevicePath \"\"" Dec 01 10:39:36 crc kubenswrapper[4933]: I1201 10:39:36.280317 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mqz4f" Dec 01 10:39:36 crc kubenswrapper[4933]: I1201 10:39:36.280449 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqz4f" event={"ID":"c00c1834-23c4-4f9f-af4a-6a7611f45800","Type":"ContainerDied","Data":"1670690c7ad0d3cafc0cac2ec949ca640949a30d840052ffb4d940f369472419"} Dec 01 10:39:36 crc kubenswrapper[4933]: I1201 10:39:36.280507 4933 scope.go:117] "RemoveContainer" containerID="65b696030b82a5255a07d966e27434a2b0f81256183ca065ad2d983e8f2b79ff" Dec 01 10:39:36 crc kubenswrapper[4933]: I1201 10:39:36.323175 4933 scope.go:117] "RemoveContainer" containerID="7db76d1b4f3bf0ec15487417d706514c96c22e250746100ada71b8f5dfed67f0" Dec 01 10:39:36 crc kubenswrapper[4933]: I1201 10:39:36.335501 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mqz4f"] Dec 01 10:39:36 crc kubenswrapper[4933]: I1201 10:39:36.346770 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xkb84" Dec 01 10:39:36 crc kubenswrapper[4933]: I1201 10:39:36.352071 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mqz4f"] Dec 01 10:39:36 crc kubenswrapper[4933]: I1201 10:39:36.352836 4933 scope.go:117] "RemoveContainer" containerID="acad68f61bd5b0054d0da3ecb773bac322cbe0ee4fb978ecd3e33883961d9532" Dec 01 10:39:36 crc kubenswrapper[4933]: I1201 10:39:36.667887 4933 scope.go:117] "RemoveContainer" containerID="3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3" Dec 01 10:39:36 crc kubenswrapper[4933]: E1201 10:39:36.668627 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:39:37 crc kubenswrapper[4933]: I1201 10:39:37.685001 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c00c1834-23c4-4f9f-af4a-6a7611f45800" path="/var/lib/kubelet/pods/c00c1834-23c4-4f9f-af4a-6a7611f45800/volumes" Dec 01 10:39:37 crc kubenswrapper[4933]: I1201 10:39:37.859763 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xkb84"] Dec 01 10:39:38 crc kubenswrapper[4933]: I1201 10:39:38.300769 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xkb84" podUID="ff80ad71-9253-4397-80d9-13758b2a8aa3" containerName="registry-server" containerID="cri-o://00f889dda18271745e235aa4e07f81c0c2aa7280c9152f18e444049ba3aeb75d" gracePeriod=2 Dec 01 10:39:38 crc kubenswrapper[4933]: I1201 10:39:38.837787 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xkb84" Dec 01 10:39:38 crc kubenswrapper[4933]: I1201 10:39:38.923907 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff80ad71-9253-4397-80d9-13758b2a8aa3-utilities\") pod \"ff80ad71-9253-4397-80d9-13758b2a8aa3\" (UID: \"ff80ad71-9253-4397-80d9-13758b2a8aa3\") " Dec 01 10:39:38 crc kubenswrapper[4933]: I1201 10:39:38.924106 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72mgx\" (UniqueName: \"kubernetes.io/projected/ff80ad71-9253-4397-80d9-13758b2a8aa3-kube-api-access-72mgx\") pod \"ff80ad71-9253-4397-80d9-13758b2a8aa3\" (UID: \"ff80ad71-9253-4397-80d9-13758b2a8aa3\") " Dec 01 10:39:38 crc kubenswrapper[4933]: I1201 10:39:38.924233 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff80ad71-9253-4397-80d9-13758b2a8aa3-catalog-content\") pod \"ff80ad71-9253-4397-80d9-13758b2a8aa3\" (UID: \"ff80ad71-9253-4397-80d9-13758b2a8aa3\") " Dec 01 10:39:38 crc kubenswrapper[4933]: I1201 10:39:38.925108 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff80ad71-9253-4397-80d9-13758b2a8aa3-utilities" (OuterVolumeSpecName: "utilities") pod "ff80ad71-9253-4397-80d9-13758b2a8aa3" (UID: "ff80ad71-9253-4397-80d9-13758b2a8aa3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:39:38 crc kubenswrapper[4933]: I1201 10:39:38.931621 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff80ad71-9253-4397-80d9-13758b2a8aa3-kube-api-access-72mgx" (OuterVolumeSpecName: "kube-api-access-72mgx") pod "ff80ad71-9253-4397-80d9-13758b2a8aa3" (UID: "ff80ad71-9253-4397-80d9-13758b2a8aa3"). InnerVolumeSpecName "kube-api-access-72mgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:39:38 crc kubenswrapper[4933]: I1201 10:39:38.992294 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff80ad71-9253-4397-80d9-13758b2a8aa3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff80ad71-9253-4397-80d9-13758b2a8aa3" (UID: "ff80ad71-9253-4397-80d9-13758b2a8aa3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:39:39 crc kubenswrapper[4933]: I1201 10:39:39.026899 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff80ad71-9253-4397-80d9-13758b2a8aa3-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 10:39:39 crc kubenswrapper[4933]: I1201 10:39:39.026948 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72mgx\" (UniqueName: \"kubernetes.io/projected/ff80ad71-9253-4397-80d9-13758b2a8aa3-kube-api-access-72mgx\") on node \"crc\" DevicePath \"\"" Dec 01 10:39:39 crc kubenswrapper[4933]: I1201 10:39:39.026961 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff80ad71-9253-4397-80d9-13758b2a8aa3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 10:39:39 crc kubenswrapper[4933]: I1201 10:39:39.312238 4933 generic.go:334] "Generic (PLEG): container finished" podID="ff80ad71-9253-4397-80d9-13758b2a8aa3" containerID="00f889dda18271745e235aa4e07f81c0c2aa7280c9152f18e444049ba3aeb75d" exitCode=0 Dec 01 10:39:39 crc kubenswrapper[4933]: I1201 10:39:39.312355 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkb84" event={"ID":"ff80ad71-9253-4397-80d9-13758b2a8aa3","Type":"ContainerDied","Data":"00f889dda18271745e235aa4e07f81c0c2aa7280c9152f18e444049ba3aeb75d"} Dec 01 10:39:39 crc kubenswrapper[4933]: I1201 10:39:39.312386 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xkb84" Dec 01 10:39:39 crc kubenswrapper[4933]: I1201 10:39:39.312414 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkb84" event={"ID":"ff80ad71-9253-4397-80d9-13758b2a8aa3","Type":"ContainerDied","Data":"45a1f8bda18d10e4d385616a70ca3289883f5de89224dc6e3879d9f9a9b9d825"} Dec 01 10:39:39 crc kubenswrapper[4933]: I1201 10:39:39.312436 4933 scope.go:117] "RemoveContainer" containerID="00f889dda18271745e235aa4e07f81c0c2aa7280c9152f18e444049ba3aeb75d" Dec 01 10:39:39 crc kubenswrapper[4933]: I1201 10:39:39.338651 4933 scope.go:117] "RemoveContainer" containerID="a8db9be986faf4fc87346304c2de1476cc531c00d404c0d1c042f6c08a178344" Dec 01 10:39:39 crc kubenswrapper[4933]: I1201 10:39:39.355153 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xkb84"] Dec 01 10:39:39 crc kubenswrapper[4933]: I1201 10:39:39.364533 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xkb84"] Dec 01 10:39:39 crc kubenswrapper[4933]: I1201 10:39:39.381992 4933 scope.go:117] "RemoveContainer" containerID="877bf88e1ffe66bd1130ee61908320bef8f2d4d8e5646f4d8ea68df918abfdae" Dec 01 10:39:39 crc kubenswrapper[4933]: I1201 10:39:39.417921 4933 scope.go:117] "RemoveContainer" containerID="00f889dda18271745e235aa4e07f81c0c2aa7280c9152f18e444049ba3aeb75d" Dec 01 10:39:39 crc kubenswrapper[4933]: E1201 10:39:39.419600 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00f889dda18271745e235aa4e07f81c0c2aa7280c9152f18e444049ba3aeb75d\": container with ID starting with 00f889dda18271745e235aa4e07f81c0c2aa7280c9152f18e444049ba3aeb75d not found: ID does not exist" containerID="00f889dda18271745e235aa4e07f81c0c2aa7280c9152f18e444049ba3aeb75d" Dec 01 10:39:39 crc kubenswrapper[4933]: I1201 10:39:39.419645 
4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00f889dda18271745e235aa4e07f81c0c2aa7280c9152f18e444049ba3aeb75d"} err="failed to get container status \"00f889dda18271745e235aa4e07f81c0c2aa7280c9152f18e444049ba3aeb75d\": rpc error: code = NotFound desc = could not find container \"00f889dda18271745e235aa4e07f81c0c2aa7280c9152f18e444049ba3aeb75d\": container with ID starting with 00f889dda18271745e235aa4e07f81c0c2aa7280c9152f18e444049ba3aeb75d not found: ID does not exist" Dec 01 10:39:39 crc kubenswrapper[4933]: I1201 10:39:39.419675 4933 scope.go:117] "RemoveContainer" containerID="a8db9be986faf4fc87346304c2de1476cc531c00d404c0d1c042f6c08a178344" Dec 01 10:39:39 crc kubenswrapper[4933]: E1201 10:39:39.420246 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8db9be986faf4fc87346304c2de1476cc531c00d404c0d1c042f6c08a178344\": container with ID starting with a8db9be986faf4fc87346304c2de1476cc531c00d404c0d1c042f6c08a178344 not found: ID does not exist" containerID="a8db9be986faf4fc87346304c2de1476cc531c00d404c0d1c042f6c08a178344" Dec 01 10:39:39 crc kubenswrapper[4933]: I1201 10:39:39.420332 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8db9be986faf4fc87346304c2de1476cc531c00d404c0d1c042f6c08a178344"} err="failed to get container status \"a8db9be986faf4fc87346304c2de1476cc531c00d404c0d1c042f6c08a178344\": rpc error: code = NotFound desc = could not find container \"a8db9be986faf4fc87346304c2de1476cc531c00d404c0d1c042f6c08a178344\": container with ID starting with a8db9be986faf4fc87346304c2de1476cc531c00d404c0d1c042f6c08a178344 not found: ID does not exist" Dec 01 10:39:39 crc kubenswrapper[4933]: I1201 10:39:39.420372 4933 scope.go:117] "RemoveContainer" containerID="877bf88e1ffe66bd1130ee61908320bef8f2d4d8e5646f4d8ea68df918abfdae" Dec 01 10:39:39 crc kubenswrapper[4933]: E1201 10:39:39.420711 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"877bf88e1ffe66bd1130ee61908320bef8f2d4d8e5646f4d8ea68df918abfdae\": container with ID starting with 877bf88e1ffe66bd1130ee61908320bef8f2d4d8e5646f4d8ea68df918abfdae not found: ID does not exist" containerID="877bf88e1ffe66bd1130ee61908320bef8f2d4d8e5646f4d8ea68df918abfdae" Dec 01 10:39:39 crc kubenswrapper[4933]: I1201 10:39:39.420744 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"877bf88e1ffe66bd1130ee61908320bef8f2d4d8e5646f4d8ea68df918abfdae"} err="failed to get container status \"877bf88e1ffe66bd1130ee61908320bef8f2d4d8e5646f4d8ea68df918abfdae\": rpc error: code = NotFound desc = could not find container \"877bf88e1ffe66bd1130ee61908320bef8f2d4d8e5646f4d8ea68df918abfdae\": container with ID starting with 877bf88e1ffe66bd1130ee61908320bef8f2d4d8e5646f4d8ea68df918abfdae not found: ID does not exist" Dec 01 10:39:39 crc kubenswrapper[4933]: I1201 10:39:39.679093 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff80ad71-9253-4397-80d9-13758b2a8aa3" path="/var/lib/kubelet/pods/ff80ad71-9253-4397-80d9-13758b2a8aa3/volumes" Dec 01 10:39:51 crc kubenswrapper[4933]: I1201 10:39:51.668889 4933 scope.go:117] "RemoveContainer" containerID="3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3" Dec 01 10:39:51 crc kubenswrapper[4933]: E1201 10:39:51.669898 4933 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:39:59 crc kubenswrapper[4933]: I1201 10:39:59.883879 4933 scope.go:117] "RemoveContainer" containerID="5f74f93a4885eea6a73b28cc6771162156ce1803227cfc53cc657e183bc87b1e" Dec 01 10:40:06 crc kubenswrapper[4933]: I1201 10:40:06.668073 4933 scope.go:117] "RemoveContainer" containerID="3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3" Dec 01 10:40:06 crc kubenswrapper[4933]: E1201 10:40:06.668749 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:40:20 crc kubenswrapper[4933]: I1201 10:40:20.668092 4933 scope.go:117] "RemoveContainer" containerID="3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3" Dec 01 10:40:20 crc kubenswrapper[4933]: E1201 10:40:20.669111 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:40:33 crc kubenswrapper[4933]: I1201 10:40:33.667944 4933 scope.go:117] "RemoveContainer" containerID="3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3" Dec 01 10:40:33 crc kubenswrapper[4933]: E1201 10:40:33.668714 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:40:47 crc kubenswrapper[4933]: I1201 10:40:47.668136 4933 scope.go:117] "RemoveContainer" containerID="3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3" Dec 01 10:40:49 crc kubenswrapper[4933]: I1201 10:40:49.032675 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerStarted","Data":"5980ab50a1619ce76cb421b80524ebb405bf0c56dae61e64a1b53628b706bcb0"} Dec 01 10:41:00 crc kubenswrapper[4933]: I1201 10:41:00.090627 4933 scope.go:117] "RemoveContainer" containerID="02316fa9fe2d4ce07f51746f80c5a6ccfb7f0f9c031aaf826f8e72385dde5e39" Dec 01 10:41:35 crc kubenswrapper[4933]: I1201 10:41:35.097733 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-smjwx/must-gather-qsc4x"] Dec 01 10:41:35 crc kubenswrapper[4933]: E1201 10:41:35.099367 4933 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="ff80ad71-9253-4397-80d9-13758b2a8aa3" containerName="registry-server" Dec 01 10:41:35 crc kubenswrapper[4933]: I1201 10:41:35.099394 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff80ad71-9253-4397-80d9-13758b2a8aa3" containerName="registry-server" Dec 01 10:41:35 crc kubenswrapper[4933]: E1201 10:41:35.099407 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c00c1834-23c4-4f9f-af4a-6a7611f45800" containerName="extract-content" Dec 01 10:41:35 crc kubenswrapper[4933]: I1201 10:41:35.099415 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="c00c1834-23c4-4f9f-af4a-6a7611f45800" containerName="extract-content" Dec 01 10:41:35 crc kubenswrapper[4933]: E1201 10:41:35.099442 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff80ad71-9253-4397-80d9-13758b2a8aa3" containerName="extract-utilities" Dec 01 10:41:35 crc kubenswrapper[4933]: I1201 10:41:35.099454 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff80ad71-9253-4397-80d9-13758b2a8aa3" containerName="extract-utilities" Dec 01 10:41:35 crc kubenswrapper[4933]: E1201 10:41:35.099470 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c00c1834-23c4-4f9f-af4a-6a7611f45800" containerName="extract-utilities" Dec 01 10:41:35 crc kubenswrapper[4933]: I1201 10:41:35.099478 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="c00c1834-23c4-4f9f-af4a-6a7611f45800" containerName="extract-utilities" Dec 01 10:41:35 crc kubenswrapper[4933]: E1201 10:41:35.099489 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff80ad71-9253-4397-80d9-13758b2a8aa3" containerName="extract-content" Dec 01 10:41:35 crc kubenswrapper[4933]: I1201 10:41:35.099496 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff80ad71-9253-4397-80d9-13758b2a8aa3" containerName="extract-content" Dec 01 10:41:35 crc kubenswrapper[4933]: E1201 10:41:35.099543 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c00c1834-23c4-4f9f-af4a-6a7611f45800" containerName="registry-server" Dec 01 10:41:35 crc kubenswrapper[4933]: I1201 10:41:35.099552 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="c00c1834-23c4-4f9f-af4a-6a7611f45800" containerName="registry-server" Dec 01 10:41:35 crc kubenswrapper[4933]: I1201 10:41:35.099790 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="c00c1834-23c4-4f9f-af4a-6a7611f45800" containerName="registry-server" Dec 01 10:41:35 crc kubenswrapper[4933]: I1201 10:41:35.099816 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff80ad71-9253-4397-80d9-13758b2a8aa3" containerName="registry-server" Dec 01 10:41:35 crc kubenswrapper[4933]: I1201 10:41:35.101260 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-smjwx/must-gather-qsc4x" Dec 01 10:41:35 crc kubenswrapper[4933]: I1201 10:41:35.110037 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-smjwx"/"kube-root-ca.crt" Dec 01 10:41:35 crc kubenswrapper[4933]: I1201 10:41:35.112145 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-smjwx"/"openshift-service-ca.crt" Dec 01 10:41:35 crc kubenswrapper[4933]: I1201 10:41:35.115372 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-smjwx"/"default-dockercfg-9kxtj" Dec 01 10:41:35 crc kubenswrapper[4933]: I1201 10:41:35.124128 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-smjwx/must-gather-qsc4x"] Dec 01 10:41:35 crc kubenswrapper[4933]: I1201 10:41:35.211065 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7xkf\" (UniqueName: \"kubernetes.io/projected/116ab7e1-9a57-4640-8e97-a3e2140b402b-kube-api-access-c7xkf\") pod \"must-gather-qsc4x\" (UID: \"116ab7e1-9a57-4640-8e97-a3e2140b402b\") " pod="openshift-must-gather-smjwx/must-gather-qsc4x" Dec 01 10:41:35 crc kubenswrapper[4933]: I1201 10:41:35.211225 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/116ab7e1-9a57-4640-8e97-a3e2140b402b-must-gather-output\") pod \"must-gather-qsc4x\" (UID: \"116ab7e1-9a57-4640-8e97-a3e2140b402b\") " pod="openshift-must-gather-smjwx/must-gather-qsc4x" Dec 01 10:41:35 crc kubenswrapper[4933]: I1201 10:41:35.313033 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/116ab7e1-9a57-4640-8e97-a3e2140b402b-must-gather-output\") pod \"must-gather-qsc4x\" (UID: \"116ab7e1-9a57-4640-8e97-a3e2140b402b\") " pod="openshift-must-gather-smjwx/must-gather-qsc4x" Dec 01 10:41:35 crc kubenswrapper[4933]: I1201 10:41:35.313184 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7xkf\" (UniqueName: \"kubernetes.io/projected/116ab7e1-9a57-4640-8e97-a3e2140b402b-kube-api-access-c7xkf\") pod \"must-gather-qsc4x\" (UID: \"116ab7e1-9a57-4640-8e97-a3e2140b402b\") " pod="openshift-must-gather-smjwx/must-gather-qsc4x" Dec 01 10:41:35 crc kubenswrapper[4933]: I1201 10:41:35.313663 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/116ab7e1-9a57-4640-8e97-a3e2140b402b-must-gather-output\") pod \"must-gather-qsc4x\" (UID: \"116ab7e1-9a57-4640-8e97-a3e2140b402b\") " pod="openshift-must-gather-smjwx/must-gather-qsc4x" Dec 01 10:41:35 crc kubenswrapper[4933]: I1201 10:41:35.346136 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7xkf\" (UniqueName: \"kubernetes.io/projected/116ab7e1-9a57-4640-8e97-a3e2140b402b-kube-api-access-c7xkf\") pod \"must-gather-qsc4x\" (UID: \"116ab7e1-9a57-4640-8e97-a3e2140b402b\") " pod="openshift-must-gather-smjwx/must-gather-qsc4x" Dec 01 10:41:35 crc kubenswrapper[4933]: I1201 10:41:35.447507 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-smjwx/must-gather-qsc4x" Dec 01 10:41:35 crc kubenswrapper[4933]: I1201 10:41:35.949213 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-smjwx/must-gather-qsc4x"] Dec 01 10:41:36 crc kubenswrapper[4933]: I1201 10:41:36.521291 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-smjwx/must-gather-qsc4x" event={"ID":"116ab7e1-9a57-4640-8e97-a3e2140b402b","Type":"ContainerStarted","Data":"908aba6518121c1380ab970c27834f3f53682dcde6f80656ede298b599973068"} Dec 01 10:41:36 crc kubenswrapper[4933]: I1201 10:41:36.522005 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-smjwx/must-gather-qsc4x" event={"ID":"116ab7e1-9a57-4640-8e97-a3e2140b402b","Type":"ContainerStarted","Data":"b9eea1b8dbf7814e7da32a7572f078b8aaacb223958f41103e3d374e4856c56a"} Dec 01 10:41:36 crc kubenswrapper[4933]: I1201 10:41:36.522027 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-smjwx/must-gather-qsc4x" event={"ID":"116ab7e1-9a57-4640-8e97-a3e2140b402b","Type":"ContainerStarted","Data":"dc4423ee430b82a0daf244e3a4197f6eea8887998421aea7241fc389b76c0e04"} Dec 01 10:41:36 crc kubenswrapper[4933]: I1201 10:41:36.543960 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-smjwx/must-gather-qsc4x" podStartSLOduration=1.5439364009999998 podStartE2EDuration="1.543936401s" podCreationTimestamp="2025-12-01 10:41:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 10:41:36.54063988 +0000 UTC m=+4187.182363535" watchObservedRunningTime="2025-12-01 10:41:36.543936401 +0000 UTC m=+4187.185660016" Dec 01 10:41:40 crc kubenswrapper[4933]: I1201 10:41:40.146059 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-smjwx/crc-debug-wnxmx"] Dec 01 10:41:40 crc kubenswrapper[4933]: I1201 10:41:40.148188 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-smjwx/crc-debug-wnxmx" Dec 01 10:41:40 crc kubenswrapper[4933]: I1201 10:41:40.273666 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwwqj\" (UniqueName: \"kubernetes.io/projected/1ce2ab89-5418-4fa5-b96e-922a1d8f5c77-kube-api-access-gwwqj\") pod \"crc-debug-wnxmx\" (UID: \"1ce2ab89-5418-4fa5-b96e-922a1d8f5c77\") " pod="openshift-must-gather-smjwx/crc-debug-wnxmx" Dec 01 10:41:40 crc kubenswrapper[4933]: I1201 10:41:40.274090 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ce2ab89-5418-4fa5-b96e-922a1d8f5c77-host\") pod \"crc-debug-wnxmx\" (UID: \"1ce2ab89-5418-4fa5-b96e-922a1d8f5c77\") " pod="openshift-must-gather-smjwx/crc-debug-wnxmx" Dec 01 10:41:40 crc kubenswrapper[4933]: I1201 10:41:40.377172 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ce2ab89-5418-4fa5-b96e-922a1d8f5c77-host\") pod \"crc-debug-wnxmx\" (UID: \"1ce2ab89-5418-4fa5-b96e-922a1d8f5c77\") " pod="openshift-must-gather-smjwx/crc-debug-wnxmx" Dec 01 10:41:40 crc kubenswrapper[4933]: I1201 10:41:40.377396 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwwqj\" (UniqueName: \"kubernetes.io/projected/1ce2ab89-5418-4fa5-b96e-922a1d8f5c77-kube-api-access-gwwqj\") pod \"crc-debug-wnxmx\" (UID: \"1ce2ab89-5418-4fa5-b96e-922a1d8f5c77\") " pod="openshift-must-gather-smjwx/crc-debug-wnxmx" Dec 01 10:41:40 crc kubenswrapper[4933]: I1201 10:41:40.378145 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ce2ab89-5418-4fa5-b96e-922a1d8f5c77-host\") pod \"crc-debug-wnxmx\" (UID: \"1ce2ab89-5418-4fa5-b96e-922a1d8f5c77\") " pod="openshift-must-gather-smjwx/crc-debug-wnxmx" Dec 01 10:41:40 crc kubenswrapper[4933]: I1201 10:41:40.398542 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwwqj\" (UniqueName: \"kubernetes.io/projected/1ce2ab89-5418-4fa5-b96e-922a1d8f5c77-kube-api-access-gwwqj\") pod \"crc-debug-wnxmx\" (UID: \"1ce2ab89-5418-4fa5-b96e-922a1d8f5c77\") " pod="openshift-must-gather-smjwx/crc-debug-wnxmx" Dec 01 10:41:40 crc kubenswrapper[4933]: I1201 10:41:40.478071 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-smjwx/crc-debug-wnxmx" Dec 01 10:41:40 crc kubenswrapper[4933]: I1201 10:41:40.587382 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-smjwx/crc-debug-wnxmx" event={"ID":"1ce2ab89-5418-4fa5-b96e-922a1d8f5c77","Type":"ContainerStarted","Data":"bafd995bda7e9dece7c6f1902f455df6ba14e98550ae30e3e1c9c1ec42ad2de7"} Dec 01 10:41:41 crc kubenswrapper[4933]: I1201 10:41:41.597420 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-smjwx/crc-debug-wnxmx" event={"ID":"1ce2ab89-5418-4fa5-b96e-922a1d8f5c77","Type":"ContainerStarted","Data":"71aef67b9f4ab45810d8bc531974135e99c95a3a5b042403fd6ee87cfa9eac64"} Dec 01 10:41:41 crc kubenswrapper[4933]: I1201 10:41:41.620675 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-smjwx/crc-debug-wnxmx" podStartSLOduration=1.620643716 podStartE2EDuration="1.620643716s" podCreationTimestamp="2025-12-01 10:41:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 10:41:41.609796839 +0000 UTC m=+4192.251520454" watchObservedRunningTime="2025-12-01 10:41:41.620643716 +0000 UTC m=+4192.262367341" Dec 01 10:42:16 crc kubenswrapper[4933]: I1201 10:42:16.998502 4933 generic.go:334] "Generic (PLEG): container finished" podID="1ce2ab89-5418-4fa5-b96e-922a1d8f5c77" containerID="71aef67b9f4ab45810d8bc531974135e99c95a3a5b042403fd6ee87cfa9eac64" exitCode=0 Dec 01 10:42:16 crc kubenswrapper[4933]: I1201 10:42:16.998599 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-smjwx/crc-debug-wnxmx" event={"ID":"1ce2ab89-5418-4fa5-b96e-922a1d8f5c77","Type":"ContainerDied","Data":"71aef67b9f4ab45810d8bc531974135e99c95a3a5b042403fd6ee87cfa9eac64"} Dec 01 10:42:18 crc kubenswrapper[4933]: I1201 10:42:18.164832 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-smjwx/crc-debug-wnxmx" Dec 01 10:42:18 crc kubenswrapper[4933]: I1201 10:42:18.175745 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ce2ab89-5418-4fa5-b96e-922a1d8f5c77-host\") pod \"1ce2ab89-5418-4fa5-b96e-922a1d8f5c77\" (UID: \"1ce2ab89-5418-4fa5-b96e-922a1d8f5c77\") " Dec 01 10:42:18 crc kubenswrapper[4933]: I1201 10:42:18.175847 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwwqj\" (UniqueName: \"kubernetes.io/projected/1ce2ab89-5418-4fa5-b96e-922a1d8f5c77-kube-api-access-gwwqj\") pod \"1ce2ab89-5418-4fa5-b96e-922a1d8f5c77\" (UID: \"1ce2ab89-5418-4fa5-b96e-922a1d8f5c77\") " Dec 01 10:42:18 crc kubenswrapper[4933]: I1201 10:42:18.175999 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ce2ab89-5418-4fa5-b96e-922a1d8f5c77-host" (OuterVolumeSpecName: "host") pod "1ce2ab89-5418-4fa5-b96e-922a1d8f5c77" (UID: "1ce2ab89-5418-4fa5-b96e-922a1d8f5c77"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 10:42:18 crc kubenswrapper[4933]: I1201 10:42:18.176964 4933 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ce2ab89-5418-4fa5-b96e-922a1d8f5c77-host\") on node \"crc\" DevicePath \"\"" Dec 01 10:42:18 crc kubenswrapper[4933]: I1201 10:42:18.183874 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ce2ab89-5418-4fa5-b96e-922a1d8f5c77-kube-api-access-gwwqj" (OuterVolumeSpecName: "kube-api-access-gwwqj") pod "1ce2ab89-5418-4fa5-b96e-922a1d8f5c77" (UID: "1ce2ab89-5418-4fa5-b96e-922a1d8f5c77"). InnerVolumeSpecName "kube-api-access-gwwqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:42:18 crc kubenswrapper[4933]: I1201 10:42:18.204395 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-smjwx/crc-debug-wnxmx"] Dec 01 10:42:18 crc kubenswrapper[4933]: I1201 10:42:18.217109 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-smjwx/crc-debug-wnxmx"] Dec 01 10:42:18 crc kubenswrapper[4933]: I1201 10:42:18.279821 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwwqj\" (UniqueName: \"kubernetes.io/projected/1ce2ab89-5418-4fa5-b96e-922a1d8f5c77-kube-api-access-gwwqj\") on node \"crc\" DevicePath \"\"" Dec 01 10:42:19 crc kubenswrapper[4933]: I1201 10:42:19.022227 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bafd995bda7e9dece7c6f1902f455df6ba14e98550ae30e3e1c9c1ec42ad2de7" Dec 01 10:42:19 crc kubenswrapper[4933]: I1201 10:42:19.022817 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-smjwx/crc-debug-wnxmx" Dec 01 10:42:19 crc kubenswrapper[4933]: I1201 10:42:19.439358 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-smjwx/crc-debug-6cjck"] Dec 01 10:42:19 crc kubenswrapper[4933]: E1201 10:42:19.440080 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ce2ab89-5418-4fa5-b96e-922a1d8f5c77" containerName="container-00" Dec 01 10:42:19 crc kubenswrapper[4933]: I1201 10:42:19.440094 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ce2ab89-5418-4fa5-b96e-922a1d8f5c77" containerName="container-00" Dec 01 10:42:19 crc kubenswrapper[4933]: I1201 10:42:19.440340 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ce2ab89-5418-4fa5-b96e-922a1d8f5c77" containerName="container-00" Dec 01 10:42:19 crc kubenswrapper[4933]: I1201 10:42:19.441034 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-smjwx/crc-debug-6cjck" Dec 01 10:42:19 crc kubenswrapper[4933]: I1201 10:42:19.505125 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/942e858d-8afc-4e29-838d-d758fb791af8-host\") pod \"crc-debug-6cjck\" (UID: \"942e858d-8afc-4e29-838d-d758fb791af8\") " pod="openshift-must-gather-smjwx/crc-debug-6cjck" Dec 01 10:42:19 crc kubenswrapper[4933]: I1201 10:42:19.505226 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm7xq\" (UniqueName: \"kubernetes.io/projected/942e858d-8afc-4e29-838d-d758fb791af8-kube-api-access-nm7xq\") pod \"crc-debug-6cjck\" (UID: \"942e858d-8afc-4e29-838d-d758fb791af8\") " pod="openshift-must-gather-smjwx/crc-debug-6cjck" Dec 01 10:42:19 crc kubenswrapper[4933]: I1201 10:42:19.607087 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/942e858d-8afc-4e29-838d-d758fb791af8-host\") pod \"crc-debug-6cjck\" (UID: \"942e858d-8afc-4e29-838d-d758fb791af8\") " pod="openshift-must-gather-smjwx/crc-debug-6cjck" Dec 01 10:42:19 crc kubenswrapper[4933]: I1201 10:42:19.607190 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm7xq\" (UniqueName: \"kubernetes.io/projected/942e858d-8afc-4e29-838d-d758fb791af8-kube-api-access-nm7xq\") pod \"crc-debug-6cjck\" (UID: \"942e858d-8afc-4e29-838d-d758fb791af8\") " pod="openshift-must-gather-smjwx/crc-debug-6cjck" Dec 01 10:42:19 crc kubenswrapper[4933]: I1201 10:42:19.607291 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/942e858d-8afc-4e29-838d-d758fb791af8-host\") pod \"crc-debug-6cjck\" (UID: \"942e858d-8afc-4e29-838d-d758fb791af8\") " pod="openshift-must-gather-smjwx/crc-debug-6cjck" Dec 01 10:42:19 crc kubenswrapper[4933]: I1201 10:42:19.681897 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ce2ab89-5418-4fa5-b96e-922a1d8f5c77" path="/var/lib/kubelet/pods/1ce2ab89-5418-4fa5-b96e-922a1d8f5c77/volumes" Dec 01 10:42:19 crc kubenswrapper[4933]: I1201 10:42:19.967049 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm7xq\" (UniqueName: \"kubernetes.io/projected/942e858d-8afc-4e29-838d-d758fb791af8-kube-api-access-nm7xq\") pod \"crc-debug-6cjck\" (UID: \"942e858d-8afc-4e29-838d-d758fb791af8\") " pod="openshift-must-gather-smjwx/crc-debug-6cjck" Dec 01 10:42:20 crc kubenswrapper[4933]: I1201 10:42:20.060237 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-smjwx/crc-debug-6cjck" Dec 01 10:42:21 crc kubenswrapper[4933]: I1201 10:42:21.041332 4933 generic.go:334] "Generic (PLEG): container finished" podID="942e858d-8afc-4e29-838d-d758fb791af8" containerID="b4ff7f46e40a016247a8b6519adfad0e4514888ca33abe5e107a5bfed4bb2122" exitCode=0 Dec 01 10:42:21 crc kubenswrapper[4933]: I1201 10:42:21.041437 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-smjwx/crc-debug-6cjck" event={"ID":"942e858d-8afc-4e29-838d-d758fb791af8","Type":"ContainerDied","Data":"b4ff7f46e40a016247a8b6519adfad0e4514888ca33abe5e107a5bfed4bb2122"} Dec 01 10:42:21 crc kubenswrapper[4933]: I1201 10:42:21.041872 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-smjwx/crc-debug-6cjck" event={"ID":"942e858d-8afc-4e29-838d-d758fb791af8","Type":"ContainerStarted","Data":"8f8bf1d4e6d29217c21ffdb73ae784bce8f93efa3996d7e5318800dca139416c"} Dec 01 10:42:21 crc kubenswrapper[4933]: I1201 10:42:21.478982 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-smjwx/crc-debug-6cjck"] Dec 01 10:42:21 crc kubenswrapper[4933]: I1201 10:42:21.489352 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-smjwx/crc-debug-6cjck"] Dec 01 10:42:22 crc kubenswrapper[4933]: I1201 10:42:22.295470 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-smjwx/crc-debug-6cjck" Dec 01 10:42:22 crc kubenswrapper[4933]: I1201 10:42:22.461795 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm7xq\" (UniqueName: \"kubernetes.io/projected/942e858d-8afc-4e29-838d-d758fb791af8-kube-api-access-nm7xq\") pod \"942e858d-8afc-4e29-838d-d758fb791af8\" (UID: \"942e858d-8afc-4e29-838d-d758fb791af8\") " Dec 01 10:42:22 crc kubenswrapper[4933]: I1201 10:42:22.462289 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/942e858d-8afc-4e29-838d-d758fb791af8-host\") pod \"942e858d-8afc-4e29-838d-d758fb791af8\" (UID: \"942e858d-8afc-4e29-838d-d758fb791af8\") " Dec 01 10:42:22 crc kubenswrapper[4933]: I1201 10:42:22.462510 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/942e858d-8afc-4e29-838d-d758fb791af8-host" (OuterVolumeSpecName: "host") pod "942e858d-8afc-4e29-838d-d758fb791af8" (UID: "942e858d-8afc-4e29-838d-d758fb791af8"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 10:42:22 crc kubenswrapper[4933]: I1201 10:42:22.462899 4933 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/942e858d-8afc-4e29-838d-d758fb791af8-host\") on node \"crc\" DevicePath \"\"" Dec 01 10:42:22 crc kubenswrapper[4933]: I1201 10:42:22.472929 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/942e858d-8afc-4e29-838d-d758fb791af8-kube-api-access-nm7xq" (OuterVolumeSpecName: "kube-api-access-nm7xq") pod "942e858d-8afc-4e29-838d-d758fb791af8" (UID: "942e858d-8afc-4e29-838d-d758fb791af8"). InnerVolumeSpecName "kube-api-access-nm7xq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:42:22 crc kubenswrapper[4933]: I1201 10:42:22.565417 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm7xq\" (UniqueName: \"kubernetes.io/projected/942e858d-8afc-4e29-838d-d758fb791af8-kube-api-access-nm7xq\") on node \"crc\" DevicePath \"\"" Dec 01 10:42:22 crc kubenswrapper[4933]: I1201 10:42:22.693386 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-smjwx/crc-debug-khpm7"] Dec 01 10:42:22 crc kubenswrapper[4933]: E1201 10:42:22.694079 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="942e858d-8afc-4e29-838d-d758fb791af8" containerName="container-00" Dec 01 10:42:22 crc kubenswrapper[4933]: I1201 10:42:22.694165 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="942e858d-8afc-4e29-838d-d758fb791af8" containerName="container-00" Dec 01 10:42:22 crc kubenswrapper[4933]: I1201 10:42:22.694458 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="942e858d-8afc-4e29-838d-d758fb791af8" containerName="container-00" Dec 01 10:42:22 crc kubenswrapper[4933]: I1201 10:42:22.695177 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-smjwx/crc-debug-khpm7" Dec 01 10:42:23 crc kubenswrapper[4933]: I1201 10:42:23.877281 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggljb\" (UniqueName: \"kubernetes.io/projected/27cd4e7e-dcd2-47a9-a4de-739274524be0-kube-api-access-ggljb\") pod \"crc-debug-khpm7\" (UID: \"27cd4e7e-dcd2-47a9-a4de-739274524be0\") " pod="openshift-must-gather-smjwx/crc-debug-khpm7" Dec 01 10:42:23 crc kubenswrapper[4933]: I1201 10:42:23.877852 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/27cd4e7e-dcd2-47a9-a4de-739274524be0-host\") pod \"crc-debug-khpm7\" (UID: \"27cd4e7e-dcd2-47a9-a4de-739274524be0\") " pod="openshift-must-gather-smjwx/crc-debug-khpm7" Dec 01 10:42:23 crc kubenswrapper[4933]: I1201 10:42:23.923578 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-smjwx/crc-debug-6cjck" Dec 01 10:42:23 crc kubenswrapper[4933]: I1201 10:42:23.936711 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="942e858d-8afc-4e29-838d-d758fb791af8" path="/var/lib/kubelet/pods/942e858d-8afc-4e29-838d-d758fb791af8/volumes" Dec 01 10:42:23 crc kubenswrapper[4933]: I1201 10:42:23.937510 4933 scope.go:117] "RemoveContainer" containerID="b4ff7f46e40a016247a8b6519adfad0e4514888ca33abe5e107a5bfed4bb2122" Dec 01 10:42:23 crc kubenswrapper[4933]: I1201 10:42:23.981295 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggljb\" (UniqueName: \"kubernetes.io/projected/27cd4e7e-dcd2-47a9-a4de-739274524be0-kube-api-access-ggljb\") pod \"crc-debug-khpm7\" (UID: \"27cd4e7e-dcd2-47a9-a4de-739274524be0\") " pod="openshift-must-gather-smjwx/crc-debug-khpm7" Dec 01 10:42:23 crc kubenswrapper[4933]: I1201 10:42:23.981566 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/27cd4e7e-dcd2-47a9-a4de-739274524be0-host\") pod \"crc-debug-khpm7\" (UID: \"27cd4e7e-dcd2-47a9-a4de-739274524be0\") " pod="openshift-must-gather-smjwx/crc-debug-khpm7" Dec 01 10:42:23 crc kubenswrapper[4933]: I1201 10:42:23.983037 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/27cd4e7e-dcd2-47a9-a4de-739274524be0-host\") pod \"crc-debug-khpm7\" (UID: \"27cd4e7e-dcd2-47a9-a4de-739274524be0\") " pod="openshift-must-gather-smjwx/crc-debug-khpm7" Dec 01 10:42:24 crc kubenswrapper[4933]: I1201 10:42:24.008115 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggljb\" (UniqueName: \"kubernetes.io/projected/27cd4e7e-dcd2-47a9-a4de-739274524be0-kube-api-access-ggljb\") pod \"crc-debug-khpm7\" (UID: \"27cd4e7e-dcd2-47a9-a4de-739274524be0\") " pod="openshift-must-gather-smjwx/crc-debug-khpm7" Dec 01 10:42:24 crc kubenswrapper[4933]: I1201 10:42:24.216563 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-smjwx/crc-debug-khpm7" Dec 01 10:42:24 crc kubenswrapper[4933]: W1201 10:42:24.249501 4933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27cd4e7e_dcd2_47a9_a4de_739274524be0.slice/crio-d9bc33571bfce2a2ca11612f57116380a73bf7c8b5c722aa3fb7821f14fe1aa7 WatchSource:0}: Error finding container d9bc33571bfce2a2ca11612f57116380a73bf7c8b5c722aa3fb7821f14fe1aa7: Status 404 returned error can't find the container with id d9bc33571bfce2a2ca11612f57116380a73bf7c8b5c722aa3fb7821f14fe1aa7 Dec 01 10:42:24 crc kubenswrapper[4933]: I1201 10:42:24.934507 4933 generic.go:334] "Generic (PLEG): container finished" podID="27cd4e7e-dcd2-47a9-a4de-739274524be0" containerID="5d1123abc4c44b471ad2ff1a36b0afec910ea43618a0b779ffda01e8a8b78648" exitCode=0 Dec 01 10:42:24 crc kubenswrapper[4933]: I1201 10:42:24.934599 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-smjwx/crc-debug-khpm7" event={"ID":"27cd4e7e-dcd2-47a9-a4de-739274524be0","Type":"ContainerDied","Data":"5d1123abc4c44b471ad2ff1a36b0afec910ea43618a0b779ffda01e8a8b78648"} Dec 01 10:42:24 crc kubenswrapper[4933]: I1201 10:42:24.935020 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-smjwx/crc-debug-khpm7" event={"ID":"27cd4e7e-dcd2-47a9-a4de-739274524be0","Type":"ContainerStarted","Data":"d9bc33571bfce2a2ca11612f57116380a73bf7c8b5c722aa3fb7821f14fe1aa7"} Dec 01 10:42:25 crc kubenswrapper[4933]: I1201 10:42:25.005651 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-smjwx/crc-debug-khpm7"] Dec 01 10:42:25 crc kubenswrapper[4933]: I1201 10:42:25.019687 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-smjwx/crc-debug-khpm7"] Dec 01 10:42:26 crc kubenswrapper[4933]: I1201 10:42:26.053944 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-smjwx/crc-debug-khpm7" Dec 01 10:42:26 crc kubenswrapper[4933]: I1201 10:42:26.228724 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggljb\" (UniqueName: \"kubernetes.io/projected/27cd4e7e-dcd2-47a9-a4de-739274524be0-kube-api-access-ggljb\") pod \"27cd4e7e-dcd2-47a9-a4de-739274524be0\" (UID: \"27cd4e7e-dcd2-47a9-a4de-739274524be0\") " Dec 01 10:42:26 crc kubenswrapper[4933]: I1201 10:42:26.228909 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/27cd4e7e-dcd2-47a9-a4de-739274524be0-host\") pod \"27cd4e7e-dcd2-47a9-a4de-739274524be0\" (UID: \"27cd4e7e-dcd2-47a9-a4de-739274524be0\") " Dec 01 10:42:26 crc kubenswrapper[4933]: I1201 10:42:26.229119 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27cd4e7e-dcd2-47a9-a4de-739274524be0-host" (OuterVolumeSpecName: "host") pod "27cd4e7e-dcd2-47a9-a4de-739274524be0" (UID: "27cd4e7e-dcd2-47a9-a4de-739274524be0"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 10:42:26 crc kubenswrapper[4933]: I1201 10:42:26.229608 4933 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/27cd4e7e-dcd2-47a9-a4de-739274524be0-host\") on node \"crc\" DevicePath \"\"" Dec 01 10:42:26 crc kubenswrapper[4933]: I1201 10:42:26.235386 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27cd4e7e-dcd2-47a9-a4de-739274524be0-kube-api-access-ggljb" (OuterVolumeSpecName: "kube-api-access-ggljb") pod "27cd4e7e-dcd2-47a9-a4de-739274524be0" (UID: "27cd4e7e-dcd2-47a9-a4de-739274524be0"). InnerVolumeSpecName "kube-api-access-ggljb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:42:26 crc kubenswrapper[4933]: I1201 10:42:26.331745 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggljb\" (UniqueName: \"kubernetes.io/projected/27cd4e7e-dcd2-47a9-a4de-739274524be0-kube-api-access-ggljb\") on node \"crc\" DevicePath \"\"" Dec 01 10:42:26 crc kubenswrapper[4933]: I1201 10:42:26.954793 4933 scope.go:117] "RemoveContainer" containerID="5d1123abc4c44b471ad2ff1a36b0afec910ea43618a0b779ffda01e8a8b78648" Dec 01 10:42:26 crc kubenswrapper[4933]: I1201 10:42:26.955336 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-smjwx/crc-debug-khpm7" Dec 01 10:42:27 crc kubenswrapper[4933]: I1201 10:42:27.686737 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27cd4e7e-dcd2-47a9-a4de-739274524be0" path="/var/lib/kubelet/pods/27cd4e7e-dcd2-47a9-a4de-739274524be0/volumes" Dec 01 10:42:55 crc kubenswrapper[4933]: I1201 10:42:55.412463 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-76fbd6d8c8-kdqkn_006eae88-9faa-428f-9d0c-a9fd104b7d06/barbican-api/0.log" Dec 01 10:42:55 crc kubenswrapper[4933]: I1201 10:42:55.736230 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-76fbd6d8c8-kdqkn_006eae88-9faa-428f-9d0c-a9fd104b7d06/barbican-api-log/0.log" Dec 01 10:42:55 crc kubenswrapper[4933]: I1201 10:42:55.799421 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5b85f87c74-hvnkk_036e08a4-0b6f-498f-a851-723b07c2f687/barbican-keystone-listener/0.log" Dec 01 10:42:55 crc kubenswrapper[4933]: I1201 10:42:55.865168 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5b85f87c74-hvnkk_036e08a4-0b6f-498f-a851-723b07c2f687/barbican-keystone-listener-log/0.log" Dec 01 10:42:56 crc kubenswrapper[4933]: I1201 10:42:56.046173 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6dd45957c5-5f9ff_19520328-8d8b-4f49-8c93-82cdfb3623c4/barbican-worker/0.log" Dec 01 10:42:56 crc kubenswrapper[4933]: I1201 10:42:56.112414 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6dd45957c5-5f9ff_19520328-8d8b-4f49-8c93-82cdfb3623c4/barbican-worker-log/0.log" Dec 01 10:42:56 crc kubenswrapper[4933]: I1201 10:42:56.330711 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-67bmd_32dfd9a4-8242-4931-a791-de1fc8b1d4a9/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 10:42:56 crc kubenswrapper[4933]: I1201 10:42:56.369601 4933 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_77bda02c-44dc-4643-b6d4-4d9f32b260cb/ceilometer-central-agent/0.log" Dec 01 10:42:56 crc kubenswrapper[4933]: I1201 10:42:56.488144 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_77bda02c-44dc-4643-b6d4-4d9f32b260cb/ceilometer-notification-agent/0.log" Dec 01 10:42:56 crc kubenswrapper[4933]: I1201 10:42:56.509030 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_77bda02c-44dc-4643-b6d4-4d9f32b260cb/proxy-httpd/0.log" Dec 01 10:42:56 crc kubenswrapper[4933]: I1201 10:42:56.565482 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_77bda02c-44dc-4643-b6d4-4d9f32b260cb/sg-core/0.log" Dec 01 10:42:56 crc kubenswrapper[4933]: I1201 10:42:56.799377 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_adfbde90-8055-49f0-9ccb-83d1502332cd/cinder-api/0.log" Dec 01 10:42:56 crc kubenswrapper[4933]: I1201 10:42:56.835313 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_adfbde90-8055-49f0-9ccb-83d1502332cd/cinder-api-log/0.log" Dec 01 10:42:57 crc kubenswrapper[4933]: I1201 10:42:57.480128 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-ddlr8_b4bcbb87-2840-4779-ad5a-9da140a34e9a/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 10:42:57 crc kubenswrapper[4933]: I1201 10:42:57.528012 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_da9e98c6-6da7-4082-8e5d-f8e571486e96/probe/0.log" Dec 01 10:42:57 crc kubenswrapper[4933]: I1201 10:42:57.541729 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_da9e98c6-6da7-4082-8e5d-f8e571486e96/cinder-scheduler/0.log" Dec 01 10:42:57 crc kubenswrapper[4933]: I1201 10:42:57.720867 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-c5hhz_eab25613-97d2-4420-875f-c5b71e62357f/init/0.log" Dec 01 10:42:57 crc kubenswrapper[4933]: I1201 10:42:57.732166 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-r68nv_5cbc2f4a-039d-45ef-9b06-1e1d59f11abb/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 10:42:57 crc kubenswrapper[4933]: I1201 10:42:57.942781 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-c5hhz_eab25613-97d2-4420-875f-c5b71e62357f/init/0.log" Dec 01 10:42:57 crc kubenswrapper[4933]: I1201 10:42:57.992091 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-c5hhz_eab25613-97d2-4420-875f-c5b71e62357f/dnsmasq-dns/0.log" Dec 01 10:42:58 crc kubenswrapper[4933]: I1201 10:42:58.045067 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-m52lh_85720139-3c78-4370-98fc-31899c778fd9/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 10:42:58 crc kubenswrapper[4933]: I1201 10:42:58.192371 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3ded59f9-1443-44e5-93d0-d6fbc126c384/glance-httpd/0.log" Dec 01 10:42:58 crc kubenswrapper[4933]: I1201 10:42:58.206426 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3ded59f9-1443-44e5-93d0-d6fbc126c384/glance-log/0.log" Dec 01 
Dec 01 10:42:58 crc kubenswrapper[4933]: I1201 10:42:58.434544 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_45de9ced-9212-422d-9433-2a543d75f37f/glance-httpd/0.log"
Dec 01 10:42:58 crc kubenswrapper[4933]: I1201 10:42:58.494151 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_45de9ced-9212-422d-9433-2a543d75f37f/glance-log/0.log"
Dec 01 10:42:58 crc kubenswrapper[4933]: I1201 10:42:58.757398 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-75479c6864-2fvz5_000656f6-99fd-43a3-8ade-31b200d0c18a/horizon/1.log"
Dec 01 10:42:58 crc kubenswrapper[4933]: I1201 10:42:58.806131 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-75479c6864-2fvz5_000656f6-99fd-43a3-8ade-31b200d0c18a/horizon/0.log"
Dec 01 10:42:59 crc kubenswrapper[4933]: I1201 10:42:59.184915 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-75479c6864-2fvz5_000656f6-99fd-43a3-8ade-31b200d0c18a/horizon-log/0.log"
Dec 01 10:42:59 crc kubenswrapper[4933]: I1201 10:42:59.506081 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-8qwvq_c212c516-4550-436c-8864-c1ff02cf5b14/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 01 10:42:59 crc kubenswrapper[4933]: I1201 10:42:59.538224 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-zsk4f_4ddb223b-7a15-4443-8347-19763927dc95/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 01 10:42:59 crc kubenswrapper[4933]: I1201 10:42:59.690388 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6fd8c5dc6c-czndt_ea85bb6c-bf92-4f66-8068-8ccc7536bdb4/keystone-api/0.log"
Dec 01 10:42:59 crc kubenswrapper[4933]: I1201 10:42:59.759295 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29409721-9h79w_e4069c3e-e0a1-4aa7-b54d-a040272d3db4/keystone-cron/0.log"
Dec 01 10:42:59 crc kubenswrapper[4933]: I1201 10:42:59.929497 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_665307e0-fe7b-411a-b394-d383671c8809/kube-state-metrics/0.log"
Dec 01 10:43:00 crc kubenswrapper[4933]: I1201 10:43:00.056412 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-xv7tj_db1900b9-3716-46b2-9761-18a6721bd258/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 01 10:43:00 crc kubenswrapper[4933]: I1201 10:43:00.377874 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5dd758bcf-r4prx_39d17922-6634-497e-9dab-330fcbde16fe/neutron-httpd/0.log"
Dec 01 10:43:00 crc kubenswrapper[4933]: I1201 10:43:00.422389 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5dd758bcf-r4prx_39d17922-6634-497e-9dab-330fcbde16fe/neutron-api/0.log"
Dec 01 10:43:00 crc kubenswrapper[4933]: I1201 10:43:00.528449 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-khppq_5242466a-3061-4db5-b9dd-77f6bff70350/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 01 10:43:01 crc kubenswrapper[4933]: I1201 10:43:01.085760 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265/nova-api-log/0.log"
Dec 01 10:43:01 crc kubenswrapper[4933]: I1201 10:43:01.139611 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_b9107505-ec32-479e-b76e-1ffa605a3bfb/nova-cell0-conductor-conductor/0.log"
Dec 01 10:43:01 crc kubenswrapper[4933]: I1201 10:43:01.503017 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_80bcbf70-b098-48a4-9dfe-fdb8c3b87e8e/nova-cell1-conductor-conductor/0.log"
Dec 01 10:43:01 crc kubenswrapper[4933]: I1201 10:43:01.521439 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6c08ac8e-6639-413d-8534-625fa6adc9ae/nova-cell1-novncproxy-novncproxy/0.log"
Dec 01 10:43:01 crc kubenswrapper[4933]: I1201 10:43:01.572377 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b8e55a9c-44f6-4b6b-94a4-9d9d4b50f265/nova-api-api/0.log"
Dec 01 10:43:01 crc kubenswrapper[4933]: I1201 10:43:01.814830 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-69txc_e9ac33c2-a83f-4ec8-8458-b366a2aebd5d/nova-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 01 10:43:01 crc kubenswrapper[4933]: I1201 10:43:01.917185 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_55378084-cbcf-4c0c-8bdc-c9d2f026ca3c/nova-metadata-log/0.log"
Dec 01 10:43:02 crc kubenswrapper[4933]: I1201 10:43:02.280639 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_0283919c-d007-4102-a7dd-33bd1388971c/nova-scheduler-scheduler/0.log"
Dec 01 10:43:02 crc kubenswrapper[4933]: I1201 10:43:02.294758 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_55a605ff-7d52-4d80-bd32-6301d0c696c1/mysql-bootstrap/0.log"
Dec 01 10:43:02 crc kubenswrapper[4933]: I1201 10:43:02.488116 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_55a605ff-7d52-4d80-bd32-6301d0c696c1/mysql-bootstrap/0.log"
Dec 01 10:43:02 crc kubenswrapper[4933]: I1201 10:43:02.542612 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_55a605ff-7d52-4d80-bd32-6301d0c696c1/galera/0.log"
Dec 01 10:43:02 crc kubenswrapper[4933]: I1201 10:43:02.712416 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1ba27bfa-74d8-4df5-8217-666a02132516/mysql-bootstrap/0.log"
Dec 01 10:43:02 crc kubenswrapper[4933]: I1201 10:43:02.905140 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1ba27bfa-74d8-4df5-8217-666a02132516/mysql-bootstrap/0.log"
Dec 01 10:43:02 crc kubenswrapper[4933]: I1201 10:43:02.988711 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1ba27bfa-74d8-4df5-8217-666a02132516/galera/0.log"
Dec 01 10:43:03 crc kubenswrapper[4933]: I1201 10:43:03.154051 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_7ad829f6-4a62-4ed2-a99c-30aed564d585/openstackclient/0.log"
Dec 01 10:43:03 crc kubenswrapper[4933]: I1201 10:43:03.234032 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-5tgrr_a1f2d08e-94f8-47ec-9e7e-a4722b71b609/ovn-controller/0.log"
Dec 01 10:43:03 crc kubenswrapper[4933]: I1201 10:43:03.424767 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-qxwjl_1db280e5-ecd7-44cf-933a-2d55ba6f7b42/openstack-network-exporter/0.log"
Dec 01 10:43:03 crc kubenswrapper[4933]: I1201 10:43:03.466468 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_55378084-cbcf-4c0c-8bdc-c9d2f026ca3c/nova-metadata-metadata/0.log"
Dec 01 10:43:03 crc kubenswrapper[4933]: I1201 10:43:03.621848 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l8bgh_90984aaa-e287-4038-bc14-16debb186a8d/ovsdb-server-init/0.log"
Dec 01 10:43:03 crc kubenswrapper[4933]: I1201 10:43:03.884071 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l8bgh_90984aaa-e287-4038-bc14-16debb186a8d/ovs-vswitchd/0.log"
Dec 01 10:43:03 crc kubenswrapper[4933]: I1201 10:43:03.887537 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l8bgh_90984aaa-e287-4038-bc14-16debb186a8d/ovsdb-server-init/0.log"
Dec 01 10:43:03 crc kubenswrapper[4933]: I1201 10:43:03.893483 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l8bgh_90984aaa-e287-4038-bc14-16debb186a8d/ovsdb-server/0.log"
Dec 01 10:43:04 crc kubenswrapper[4933]: I1201 10:43:04.135692 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9dc35964-1186-483a-8904-c98af6497c53/openstack-network-exporter/0.log"
Dec 01 10:43:04 crc kubenswrapper[4933]: I1201 10:43:04.223627 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-s6cr9_f053536c-b281-4870-b827-93d59be1fdbd/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 01 10:43:04 crc kubenswrapper[4933]: I1201 10:43:04.271008 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9dc35964-1186-483a-8904-c98af6497c53/ovn-northd/0.log"
Dec 01 10:43:04 crc kubenswrapper[4933]: I1201 10:43:04.364945 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_fa3b7950-1309-47f9-9372-7932d0ef0ced/openstack-network-exporter/0.log"
Dec 01 10:43:04 crc kubenswrapper[4933]: I1201 10:43:04.456532 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_fa3b7950-1309-47f9-9372-7932d0ef0ced/ovsdbserver-nb/0.log"
Dec 01 10:43:04 crc kubenswrapper[4933]: I1201 10:43:04.623066 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_75ef03e2-9526-4184-a3cf-2a5bb26fec93/openstack-network-exporter/0.log"
Dec 01 10:43:04 crc kubenswrapper[4933]: I1201 10:43:04.755690 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_75ef03e2-9526-4184-a3cf-2a5bb26fec93/ovsdbserver-sb/0.log"
Dec 01 10:43:04 crc kubenswrapper[4933]: I1201 10:43:04.822637 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-55788c59f6-zd5sp_c2caece4-8b42-4e68-9a5d-096ef39b4120/placement-api/0.log"
Dec 01 10:43:04 crc kubenswrapper[4933]: I1201 10:43:04.987608 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-55788c59f6-zd5sp_c2caece4-8b42-4e68-9a5d-096ef39b4120/placement-log/0.log"
Dec 01 10:43:05 crc kubenswrapper[4933]: I1201 10:43:05.087625 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3ac85014-ac29-45f6-9461-a8c02c4fcca4/setup-container/0.log"
Dec 01 10:43:05 crc kubenswrapper[4933]: I1201 10:43:05.301562 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3ac85014-ac29-45f6-9461-a8c02c4fcca4/rabbitmq/0.log"
Dec 01 10:43:05 crc kubenswrapper[4933]: I1201 10:43:05.310389 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3ac85014-ac29-45f6-9461-a8c02c4fcca4/setup-container/0.log"
Dec 01 10:43:05 crc kubenswrapper[4933]: I1201 10:43:05.397591 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4b205db3-c812-4f4e-a81c-3662f2ca0cf1/setup-container/0.log"
Dec 01 10:43:05 crc kubenswrapper[4933]: I1201 10:43:05.611289 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4b205db3-c812-4f4e-a81c-3662f2ca0cf1/rabbitmq/0.log"
Dec 01 10:43:05 crc kubenswrapper[4933]: I1201 10:43:05.684913 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4b205db3-c812-4f4e-a81c-3662f2ca0cf1/setup-container/0.log"
Dec 01 10:43:05 crc kubenswrapper[4933]: I1201 10:43:05.696803 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-wpv4r_6a84779c-7b89-4a0c-9ea0-34d0af08979d/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 01 10:43:05 crc kubenswrapper[4933]: I1201 10:43:05.911474 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-tkd78_d9cb819c-73da-4725-aaca-3cac78b4670f/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 01 10:43:05 crc kubenswrapper[4933]: I1201 10:43:05.986947 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-grj94_af261b96-cdfe-4987-8689-bec0506287d2/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 01 10:43:06 crc kubenswrapper[4933]: I1201 10:43:06.210266 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-78bfv_c57f613c-9cc6-447a-acf6-11a2d381862f/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 01 10:43:06 crc kubenswrapper[4933]: I1201 10:43:06.353578 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-gt2rs_d2735060-c736-46d2-882c-60c0a7e96bc8/ssh-known-hosts-edpm-deployment/0.log"
Dec 01 10:43:06 crc kubenswrapper[4933]: I1201 10:43:06.596322 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5c69467867-495s4_4ceb0496-c824-4d20-8a63-43bc6aa47f97/proxy-server/0.log"
Dec 01 10:43:06 crc kubenswrapper[4933]: I1201 10:43:06.647713 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5c69467867-495s4_4ceb0496-c824-4d20-8a63-43bc6aa47f97/proxy-httpd/0.log"
Dec 01 10:43:06 crc kubenswrapper[4933]: I1201 10:43:06.730515 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-wpgrh_fa13e197-3320-4314-86ee-a1b90292ab1d/swift-ring-rebalance/0.log"
Dec 01 10:43:06 crc kubenswrapper[4933]: I1201 10:43:06.848957 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/account-auditor/0.log"
Dec 01 10:43:06 crc kubenswrapper[4933]: I1201 10:43:06.916707 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/account-reaper/0.log"
Dec 01 10:43:07 crc kubenswrapper[4933]: I1201 10:43:07.020355 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/account-replicator/0.log"
Dec 01 10:43:07 crc kubenswrapper[4933]: I1201 10:43:07.101808 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/account-server/0.log"
Dec 01 10:43:07 crc kubenswrapper[4933]: I1201 10:43:07.127726 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/container-auditor/0.log"
Dec 01 10:43:07 crc kubenswrapper[4933]: I1201 10:43:07.181040 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/container-replicator/0.log"
Dec 01 10:43:07 crc kubenswrapper[4933]: I1201 10:43:07.335166 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/container-server/0.log"
Dec 01 10:43:07 crc kubenswrapper[4933]: I1201 10:43:07.388966 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/container-updater/0.log"
Dec 01 10:43:07 crc kubenswrapper[4933]: I1201 10:43:07.415513 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/object-auditor/0.log"
Dec 01 10:43:07 crc kubenswrapper[4933]: I1201 10:43:07.452640 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/object-expirer/0.log"
Dec 01 10:43:07 crc kubenswrapper[4933]: I1201 10:43:07.611354 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/object-updater/0.log"
Dec 01 10:43:07 crc kubenswrapper[4933]: I1201 10:43:07.636583 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/object-replicator/0.log"
Dec 01 10:43:07 crc kubenswrapper[4933]: I1201 10:43:07.682410 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/object-server/0.log"
Dec 01 10:43:07 crc kubenswrapper[4933]: I1201 10:43:07.861889 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/rsync/0.log"
Dec 01 10:43:08 crc kubenswrapper[4933]: I1201 10:43:08.456859 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_59f78861-3fff-42c4-9592-4eb047ea6a88/swift-recon-cron/0.log"
Dec 01 10:43:08 crc kubenswrapper[4933]: I1201 10:43:08.544806 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-qxssd_eab7fc1e-f6ce-41a3-9a65-1773b1c2e823/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 01 10:43:08 crc kubenswrapper[4933]: I1201 10:43:08.689836 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_c272594d-4d61-490a-a44d-0a82106c9a1f/tempest-tests-tempest-tests-runner/0.log"
Dec 01 10:43:08 crc kubenswrapper[4933]: I1201 10:43:08.767900 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_265d92de-a6f0-45ea-9175-15a4bb7c1716/test-operator-logs-container/0.log"
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-x7ljl_af544a25-e743-4edc-8d80-228c9da3ce45/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 10:43:11 crc kubenswrapper[4933]: I1201 10:43:11.741433 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 10:43:11 crc kubenswrapper[4933]: I1201 10:43:11.741788 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 10:43:20 crc kubenswrapper[4933]: I1201 10:43:20.335745 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_12be8eec-c6b1-4606-83de-e19ac2ab17eb/memcached/0.log" Dec 01 10:43:38 crc kubenswrapper[4933]: I1201 10:43:38.596930 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7_e9c7ed8e-3041-437c-a3f0-b8c2cf94c503/util/0.log" Dec 01 10:43:38 crc kubenswrapper[4933]: I1201 10:43:38.821239 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7_e9c7ed8e-3041-437c-a3f0-b8c2cf94c503/util/0.log" Dec 01 10:43:38 crc kubenswrapper[4933]: I1201 10:43:38.858394 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7_e9c7ed8e-3041-437c-a3f0-b8c2cf94c503/pull/0.log" Dec 01 10:43:38 crc kubenswrapper[4933]: I1201 10:43:38.879978 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7_e9c7ed8e-3041-437c-a3f0-b8c2cf94c503/pull/0.log" Dec 01 10:43:39 crc kubenswrapper[4933]: I1201 10:43:39.024598 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7_e9c7ed8e-3041-437c-a3f0-b8c2cf94c503/pull/0.log" Dec 01 10:43:39 crc kubenswrapper[4933]: I1201 10:43:39.087015 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7_e9c7ed8e-3041-437c-a3f0-b8c2cf94c503/util/0.log" Dec 01 10:43:39 crc kubenswrapper[4933]: I1201 10:43:39.093590 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_898359261d756a0507f7ba7e275dc0acf73d1ef6eb8026a7db6673506ez7lb7_e9c7ed8e-3041-437c-a3f0-b8c2cf94c503/extract/0.log" Dec 01 10:43:39 crc kubenswrapper[4933]: I1201 10:43:39.256112 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-9fvkr_e1f14086-5509-48fe-a88c-c2717009ef93/kube-rbac-proxy/0.log" Dec 01 10:43:39 crc kubenswrapper[4933]: I1201 10:43:39.351907 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-fntw7_19b19877-3b1b-40f9-9501-329bceb4756a/kube-rbac-proxy/0.log" Dec 01 10:43:39 crc kubenswrapper[4933]: I1201 10:43:39.402379 4933 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-9fvkr_e1f14086-5509-48fe-a88c-c2717009ef93/manager/0.log" Dec 01 10:43:39 crc kubenswrapper[4933]: I1201 10:43:39.498825 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-fntw7_19b19877-3b1b-40f9-9501-329bceb4756a/manager/0.log" Dec 01 10:43:39 crc kubenswrapper[4933]: I1201 10:43:39.651380 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-cpthv_9c52b072-b528-4fee-88b8-c878150882b1/kube-rbac-proxy/0.log" Dec 01 10:43:39 crc kubenswrapper[4933]: I1201 10:43:39.704706 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-cpthv_9c52b072-b528-4fee-88b8-c878150882b1/manager/0.log" Dec 01 10:43:39 crc kubenswrapper[4933]: I1201 10:43:39.870225 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-4bfbh_e88cb01f-84f3-4cdc-9d5d-f283f883868e/kube-rbac-proxy/0.log" Dec 01 10:43:39 crc kubenswrapper[4933]: I1201 10:43:39.980786 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-4bfbh_e88cb01f-84f3-4cdc-9d5d-f283f883868e/manager/0.log" Dec 01 10:43:40 crc kubenswrapper[4933]: I1201 10:43:40.006568 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-6q6m6_9564306d-6348-40b4-9e3e-42fcd5778383/kube-rbac-proxy/0.log" Dec 01 10:43:40 crc kubenswrapper[4933]: I1201 10:43:40.113661 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-6q6m6_9564306d-6348-40b4-9e3e-42fcd5778383/manager/0.log" Dec 01 10:43:40 crc kubenswrapper[4933]: I1201 10:43:40.242365 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-gd76x_96d92174-459d-4657-bbbb-a56271877411/kube-rbac-proxy/0.log" Dec 01 10:43:40 crc kubenswrapper[4933]: I1201 10:43:40.361940 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-gd76x_96d92174-459d-4657-bbbb-a56271877411/manager/0.log" Dec 01 10:43:40 crc kubenswrapper[4933]: I1201 10:43:40.523328 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-hcgq6_7dd39823-94d3-4a96-90e4-ada73223c4b0/kube-rbac-proxy/0.log" Dec 01 10:43:40 crc kubenswrapper[4933]: I1201 10:43:40.752720 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-7rjkh_eefc3c9c-eade-4b6e-8902-6936d481cb1b/kube-rbac-proxy/0.log" Dec 01 10:43:40 crc kubenswrapper[4933]: I1201 10:43:40.759797 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-hcgq6_7dd39823-94d3-4a96-90e4-ada73223c4b0/manager/0.log" Dec 01 10:43:40 crc kubenswrapper[4933]: I1201 10:43:40.794564 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-7rjkh_eefc3c9c-eade-4b6e-8902-6936d481cb1b/manager/0.log" Dec 01 10:43:40 crc kubenswrapper[4933]: I1201 
10:43:40.992408 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-546d4bdf48-94gt2_b303701b-30bc-4779-b1fa-f574bd6cce65/kube-rbac-proxy/0.log" Dec 01 10:43:41 crc kubenswrapper[4933]: I1201 10:43:41.101490 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-546d4bdf48-94gt2_b303701b-30bc-4779-b1fa-f574bd6cce65/manager/0.log" Dec 01 10:43:41 crc kubenswrapper[4933]: I1201 10:43:41.158841 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-mlmgw_b925c282-ee4d-4b1f-8f18-d3baa2f8faef/kube-rbac-proxy/0.log" Dec 01 10:43:41 crc kubenswrapper[4933]: I1201 10:43:41.226373 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-mlmgw_b925c282-ee4d-4b1f-8f18-d3baa2f8faef/manager/0.log" Dec 01 10:43:41 crc kubenswrapper[4933]: I1201 10:43:41.342929 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-n5pnz_e32cc225-71ff-4edf-8e11-ac7abf7afe27/kube-rbac-proxy/0.log" Dec 01 10:43:41 crc kubenswrapper[4933]: I1201 10:43:41.393047 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-n5pnz_e32cc225-71ff-4edf-8e11-ac7abf7afe27/manager/0.log" Dec 01 10:43:41 crc kubenswrapper[4933]: I1201 10:43:41.558810 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-8rdcd_c10a734c-970c-42dd-aa15-a27dd68941e1/kube-rbac-proxy/0.log" Dec 01 10:43:41 crc kubenswrapper[4933]: I1201 10:43:41.599804 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-8rdcd_c10a734c-970c-42dd-aa15-a27dd68941e1/manager/0.log" Dec 01 10:43:41 crc kubenswrapper[4933]: I1201 10:43:41.729492 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-w8tzl_9a84bd2a-303d-492c-b507-61fa590290d1/kube-rbac-proxy/0.log" Dec 01 10:43:41 crc kubenswrapper[4933]: I1201 10:43:41.741028 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 10:43:41 crc kubenswrapper[4933]: I1201 10:43:41.741097 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 10:43:41 crc kubenswrapper[4933]: I1201 10:43:41.848941 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-w8tzl_9a84bd2a-303d-492c-b507-61fa590290d1/manager/0.log" Dec 01 10:43:41 crc kubenswrapper[4933]: I1201 10:43:41.874940 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-lxkkf_a8f52d69-0961-4ac0-b41f-200400bfcf2b/kube-rbac-proxy/0.log" Dec 01 10:43:41 crc kubenswrapper[4933]: 
I1201 10:43:41.987206 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-lxkkf_a8f52d69-0961-4ac0-b41f-200400bfcf2b/manager/0.log" Dec 01 10:43:42 crc kubenswrapper[4933]: I1201 10:43:42.053521 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd45jrln_96699ea8-fc44-4dc2-a6f2-f2109d091097/manager/0.log" Dec 01 10:43:42 crc kubenswrapper[4933]: I1201 10:43:42.065270 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd45jrln_96699ea8-fc44-4dc2-a6f2-f2109d091097/kube-rbac-proxy/0.log" Dec 01 10:43:42 crc kubenswrapper[4933]: I1201 10:43:42.589918 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-66ff97f68c-jqgr4_2a175cf5-68b0-46ab-9e64-646af044da97/operator/0.log" Dec 01 10:43:42 crc kubenswrapper[4933]: I1201 10:43:42.620551 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-vnqpf_61a68407-8b55-4951-aa9b-8f2348e5b3b1/registry-server/0.log" Dec 01 10:43:42 crc kubenswrapper[4933]: I1201 10:43:42.818480 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-7c9rv_2550654d-3a84-420e-bcaa-75a2f3c88dec/kube-rbac-proxy/0.log" Dec 01 10:43:42 crc kubenswrapper[4933]: I1201 10:43:42.877890 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-7c9rv_2550654d-3a84-420e-bcaa-75a2f3c88dec/manager/0.log" Dec 01 10:43:42 crc kubenswrapper[4933]: I1201 10:43:42.935149 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-w92f7_83542dc0-212d-4257-935c-aced954e9157/kube-rbac-proxy/0.log" Dec 01 10:43:43 crc kubenswrapper[4933]: I1201 10:43:43.133746 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-w92f7_83542dc0-212d-4257-935c-aced954e9157/manager/0.log" Dec 01 10:43:43 crc kubenswrapper[4933]: I1201 10:43:43.190219 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-dhlrp_3aa898e5-9bf0-4baf-9c71-261229f0baf0/operator/0.log" Dec 01 10:43:43 crc kubenswrapper[4933]: I1201 10:43:43.393298 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-frx4s_0bd5ca15-126a-4c31-814b-b0390dc01b3c/kube-rbac-proxy/0.log" Dec 01 10:43:43 crc kubenswrapper[4933]: I1201 10:43:43.465433 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-547ff67f67-9fnsd_c976f88e-97eb-4223-9475-252505656b6d/manager/0.log" Dec 01 10:43:43 crc kubenswrapper[4933]: I1201 10:43:43.502141 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-frx4s_0bd5ca15-126a-4c31-814b-b0390dc01b3c/manager/0.log" Dec 01 10:43:43 crc kubenswrapper[4933]: I1201 10:43:43.517557 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-b2gcw_6c192ef8-b774-486f-bb69-d73e8b89989e/kube-rbac-proxy/0.log" Dec 01 10:43:43 crc 
Dec 01 10:43:43 crc kubenswrapper[4933]: I1201 10:43:43.707218 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-b2gcw_6c192ef8-b774-486f-bb69-d73e8b89989e/manager/0.log"
Dec 01 10:43:43 crc kubenswrapper[4933]: I1201 10:43:43.742926 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-w9jcs_c807406f-80fb-422b-a68f-e9706da2ac42/kube-rbac-proxy/0.log"
Dec 01 10:43:43 crc kubenswrapper[4933]: I1201 10:43:43.774972 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-w9jcs_c807406f-80fb-422b-a68f-e9706da2ac42/manager/0.log"
Dec 01 10:43:43 crc kubenswrapper[4933]: I1201 10:43:43.919404 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-bmhhw_48cfc1f9-dbcb-4ff7-88b7-aa7709648627/manager/0.log"
Dec 01 10:43:43 crc kubenswrapper[4933]: I1201 10:43:43.954650 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-bmhhw_48cfc1f9-dbcb-4ff7-88b7-aa7709648627/kube-rbac-proxy/0.log"
Dec 01 10:44:05 crc kubenswrapper[4933]: I1201 10:44:05.782712 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-6dwg7_a7bd5924-9a3f-43cf-99b1-2d5d20975f81/control-plane-machine-set-operator/0.log"
Dec 01 10:44:05 crc kubenswrapper[4933]: I1201 10:44:05.879645 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xdrhr_29cdc67d-6d2a-44b2-bd31-3634aff7f52e/kube-rbac-proxy/0.log"
Dec 01 10:44:05 crc kubenswrapper[4933]: I1201 10:44:05.911144 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xdrhr_29cdc67d-6d2a-44b2-bd31-3634aff7f52e/machine-api-operator/0.log"
Dec 01 10:44:11 crc kubenswrapper[4933]: I1201 10:44:11.741606 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 10:44:11 crc kubenswrapper[4933]: I1201 10:44:11.742268 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 10:44:11 crc kubenswrapper[4933]: I1201 10:44:11.742356 4933 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd"
Dec 01 10:44:11 crc kubenswrapper[4933]: I1201 10:44:11.743474 4933 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5980ab50a1619ce76cb421b80524ebb405bf0c56dae61e64a1b53628b706bcb0"} pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 01 10:44:11 crc kubenswrapper[4933]: I1201 10:44:11.743547 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" containerID="cri-o://5980ab50a1619ce76cb421b80524ebb405bf0c56dae61e64a1b53628b706bcb0" gracePeriod=600
Dec 01 10:44:12 crc kubenswrapper[4933]: I1201 10:44:12.063564 4933 generic.go:334] "Generic (PLEG): container finished" podID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerID="5980ab50a1619ce76cb421b80524ebb405bf0c56dae61e64a1b53628b706bcb0" exitCode=0
Dec 01 10:44:12 crc kubenswrapper[4933]: I1201 10:44:12.063634 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerDied","Data":"5980ab50a1619ce76cb421b80524ebb405bf0c56dae61e64a1b53628b706bcb0"}
Dec 01 10:44:12 crc kubenswrapper[4933]: I1201 10:44:12.064036 4933 scope.go:117] "RemoveContainer" containerID="3883fa7db2acfc737647ffc4e0c893af05d67e800c90690c08459eb8ea17bea3"
Dec 01 10:44:13 crc kubenswrapper[4933]: I1201 10:44:13.090501 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerStarted","Data":"700470a90e5bac216ac65c0c1909e7b5b54ddd8643e5e759fd5d0fe0b7af56cb"}
Dec 01 10:44:19 crc kubenswrapper[4933]: I1201 10:44:19.317898 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-ts7sf_b9576096-fd1b-4f6e-95c9-37517c77cca1/cert-manager-controller/0.log"
Dec 01 10:44:20 crc kubenswrapper[4933]: I1201 10:44:20.066944 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-5b7g6_d32d2e97-81db-4119-b9c6-a71b974a56a8/cert-manager-cainjector/0.log"
Dec 01 10:44:20 crc kubenswrapper[4933]: I1201 10:44:20.127821 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-x8fvf_c54106fe-eb4b-4f41-afce-e9fde8067ec8/cert-manager-webhook/0.log"
Dec 01 10:44:33 crc kubenswrapper[4933]: I1201 10:44:33.515954 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-vh9f4_575c5872-9ab6-4a15-86b7-9dfbe33c0171/nmstate-console-plugin/0.log"
Dec 01 10:44:33 crc kubenswrapper[4933]: I1201 10:44:33.756672 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-sn9zt_b95afca6-9346-403c-b1ab-d04d36537c40/nmstate-handler/0.log"
Dec 01 10:44:33 crc kubenswrapper[4933]: I1201 10:44:33.794463 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-kqnxq_18b298e1-b8df-4272-a30c-496424de8d76/kube-rbac-proxy/0.log"
Dec 01 10:44:33 crc kubenswrapper[4933]: I1201 10:44:33.851767 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-kqnxq_18b298e1-b8df-4272-a30c-496424de8d76/nmstate-metrics/0.log"
Dec 01 10:44:33 crc kubenswrapper[4933]: I1201 10:44:33.975786 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-l6tck_c3ce2443-f386-4203-8c92-f0961122fb6b/nmstate-operator/0.log"
Dec 01 10:44:34 crc kubenswrapper[4933]: I1201 10:44:34.061806 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-8bmq8_ccec4c34-0e07-4b0a-a36a-a6bc46982fa6/nmstate-webhook/0.log"
Dec 01 10:44:40 crc kubenswrapper[4933]: I1201 10:44:40.768099 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bcswt"]
Dec 01 10:44:40 crc kubenswrapper[4933]: E1201 10:44:40.768993 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27cd4e7e-dcd2-47a9-a4de-739274524be0" containerName="container-00"
Dec 01 10:44:40 crc kubenswrapper[4933]: I1201 10:44:40.769008 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="27cd4e7e-dcd2-47a9-a4de-739274524be0" containerName="container-00"
Dec 01 10:44:40 crc kubenswrapper[4933]: I1201 10:44:40.769253 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="27cd4e7e-dcd2-47a9-a4de-739274524be0" containerName="container-00"
Dec 01 10:44:40 crc kubenswrapper[4933]: I1201 10:44:40.773489 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bcswt"
Dec 01 10:44:40 crc kubenswrapper[4933]: I1201 10:44:40.793338 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bcswt"]
Dec 01 10:44:40 crc kubenswrapper[4933]: I1201 10:44:40.872416 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f027136-6631-41cc-9f76-37f6cfcd6fac-utilities\") pod \"redhat-operators-bcswt\" (UID: \"8f027136-6631-41cc-9f76-37f6cfcd6fac\") " pod="openshift-marketplace/redhat-operators-bcswt"
Dec 01 10:44:40 crc kubenswrapper[4933]: I1201 10:44:40.872528 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f027136-6631-41cc-9f76-37f6cfcd6fac-catalog-content\") pod \"redhat-operators-bcswt\" (UID: \"8f027136-6631-41cc-9f76-37f6cfcd6fac\") " pod="openshift-marketplace/redhat-operators-bcswt"
Dec 01 10:44:40 crc kubenswrapper[4933]: I1201 10:44:40.872554 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmvrj\" (UniqueName: \"kubernetes.io/projected/8f027136-6631-41cc-9f76-37f6cfcd6fac-kube-api-access-hmvrj\") pod \"redhat-operators-bcswt\" (UID: \"8f027136-6631-41cc-9f76-37f6cfcd6fac\") " pod="openshift-marketplace/redhat-operators-bcswt"
Dec 01 10:44:40 crc kubenswrapper[4933]: I1201 10:44:40.974171 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f027136-6631-41cc-9f76-37f6cfcd6fac-utilities\") pod \"redhat-operators-bcswt\" (UID: \"8f027136-6631-41cc-9f76-37f6cfcd6fac\") " pod="openshift-marketplace/redhat-operators-bcswt"
Dec 01 10:44:40 crc kubenswrapper[4933]: I1201 10:44:40.974324 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f027136-6631-41cc-9f76-37f6cfcd6fac-catalog-content\") pod \"redhat-operators-bcswt\" (UID: \"8f027136-6631-41cc-9f76-37f6cfcd6fac\") " pod="openshift-marketplace/redhat-operators-bcswt"
Dec 01 10:44:40 crc kubenswrapper[4933]: I1201 10:44:40.974375 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmvrj\" (UniqueName: \"kubernetes.io/projected/8f027136-6631-41cc-9f76-37f6cfcd6fac-kube-api-access-hmvrj\") pod \"redhat-operators-bcswt\" (UID: \"8f027136-6631-41cc-9f76-37f6cfcd6fac\") " pod="openshift-marketplace/redhat-operators-bcswt"
Dec 01 10:44:40 crc kubenswrapper[4933]: I1201
10:44:40.975278 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f027136-6631-41cc-9f76-37f6cfcd6fac-utilities\") pod \"redhat-operators-bcswt\" (UID: \"8f027136-6631-41cc-9f76-37f6cfcd6fac\") " pod="openshift-marketplace/redhat-operators-bcswt" Dec 01 10:44:40 crc kubenswrapper[4933]: I1201 10:44:40.975409 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f027136-6631-41cc-9f76-37f6cfcd6fac-catalog-content\") pod \"redhat-operators-bcswt\" (UID: \"8f027136-6631-41cc-9f76-37f6cfcd6fac\") " pod="openshift-marketplace/redhat-operators-bcswt" Dec 01 10:44:41 crc kubenswrapper[4933]: I1201 10:44:41.566357 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmvrj\" (UniqueName: \"kubernetes.io/projected/8f027136-6631-41cc-9f76-37f6cfcd6fac-kube-api-access-hmvrj\") pod \"redhat-operators-bcswt\" (UID: \"8f027136-6631-41cc-9f76-37f6cfcd6fac\") " pod="openshift-marketplace/redhat-operators-bcswt" Dec 01 10:44:41 crc kubenswrapper[4933]: I1201 10:44:41.718125 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bcswt" Dec 01 10:44:42 crc kubenswrapper[4933]: I1201 10:44:42.285477 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bcswt"] Dec 01 10:44:42 crc kubenswrapper[4933]: I1201 10:44:42.413051 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcswt" event={"ID":"8f027136-6631-41cc-9f76-37f6cfcd6fac","Type":"ContainerStarted","Data":"fe4ba8b4d8badfdcb1b16944a4b7e2ce486864b201348105cad7b0231ed1cee1"} Dec 01 10:44:43 crc kubenswrapper[4933]: I1201 10:44:43.424282 4933 generic.go:334] "Generic (PLEG): container finished" podID="8f027136-6631-41cc-9f76-37f6cfcd6fac" containerID="9226950242bc0cb099592f8b8e224e9ca373e620206bd1deea627cbe983d3845" exitCode=0 Dec 01 10:44:43 crc kubenswrapper[4933]: I1201 10:44:43.424559 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcswt" event={"ID":"8f027136-6631-41cc-9f76-37f6cfcd6fac","Type":"ContainerDied","Data":"9226950242bc0cb099592f8b8e224e9ca373e620206bd1deea627cbe983d3845"} Dec 01 10:44:43 crc kubenswrapper[4933]: I1201 10:44:43.429246 4933 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 10:44:45 crc kubenswrapper[4933]: I1201 10:44:45.448152 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcswt" event={"ID":"8f027136-6631-41cc-9f76-37f6cfcd6fac","Type":"ContainerStarted","Data":"8499f6f4df1073ede0abdf633dd356ee33b4257dbbc811c5f356bfba45654e93"} Dec 01 10:44:47 crc kubenswrapper[4933]: I1201 10:44:47.479668 4933 generic.go:334] "Generic (PLEG): container finished" podID="8f027136-6631-41cc-9f76-37f6cfcd6fac" containerID="8499f6f4df1073ede0abdf633dd356ee33b4257dbbc811c5f356bfba45654e93" exitCode=0 Dec 01 10:44:47 crc kubenswrapper[4933]: I1201 10:44:47.480090 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcswt" event={"ID":"8f027136-6631-41cc-9f76-37f6cfcd6fac","Type":"ContainerDied","Data":"8499f6f4df1073ede0abdf633dd356ee33b4257dbbc811c5f356bfba45654e93"} Dec 01 10:44:48 crc kubenswrapper[4933]: I1201 10:44:48.493493 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-bcswt" event={"ID":"8f027136-6631-41cc-9f76-37f6cfcd6fac","Type":"ContainerStarted","Data":"5223bfd394fdc87104ee02ce63cf205f186ce5a33137634e787c216aeb56d3f6"} Dec 01 10:44:48 crc kubenswrapper[4933]: I1201 10:44:48.526226 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bcswt" podStartSLOduration=4.007385064 podStartE2EDuration="8.526199079s" podCreationTimestamp="2025-12-01 10:44:40 +0000 UTC" firstStartedPulling="2025-12-01 10:44:43.428870394 +0000 UTC m=+4374.070594009" lastFinishedPulling="2025-12-01 10:44:47.947684409 +0000 UTC m=+4378.589408024" observedRunningTime="2025-12-01 10:44:48.520007655 +0000 UTC m=+4379.161731290" watchObservedRunningTime="2025-12-01 10:44:48.526199079 +0000 UTC m=+4379.167922694" Dec 01 10:44:51 crc kubenswrapper[4933]: I1201 10:44:51.720334 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bcswt" Dec 01 10:44:51 crc kubenswrapper[4933]: I1201 10:44:51.721234 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bcswt" Dec 01 10:44:52 crc kubenswrapper[4933]: I1201 10:44:52.769725 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-rczrx_e9c82311-fa12-41b3-a4e2-50bca0b1c23f/kube-rbac-proxy/0.log" Dec 01 10:44:52 crc kubenswrapper[4933]: I1201 10:44:52.794509 4933 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bcswt" podUID="8f027136-6631-41cc-9f76-37f6cfcd6fac" containerName="registry-server" probeResult="failure" output=< Dec 01 10:44:52 crc kubenswrapper[4933]: timeout: failed to connect service ":50051" within 1s Dec 01 10:44:52 crc kubenswrapper[4933]: > Dec 01 10:44:52 crc kubenswrapper[4933]: I1201 10:44:52.842406 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-rczrx_e9c82311-fa12-41b3-a4e2-50bca0b1c23f/controller/0.log" Dec 01 10:44:53 crc kubenswrapper[4933]: I1201 10:44:53.084983 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/cp-frr-files/0.log" Dec 01 10:44:53 crc kubenswrapper[4933]: I1201 10:44:53.315153 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/cp-reloader/0.log" Dec 01 10:44:53 crc kubenswrapper[4933]: I1201 10:44:53.319284 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/cp-metrics/0.log" Dec 01 10:44:53 crc kubenswrapper[4933]: I1201 10:44:53.359785 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/cp-frr-files/0.log" Dec 01 10:44:53 crc kubenswrapper[4933]: I1201 10:44:53.428962 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/cp-reloader/0.log" Dec 01 10:44:53 crc kubenswrapper[4933]: I1201 10:44:53.861893 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/cp-reloader/0.log" Dec 01 10:44:53 crc kubenswrapper[4933]: I1201 10:44:53.921873 4933 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/cp-frr-files/0.log" Dec 01 10:44:53 crc kubenswrapper[4933]: I1201 10:44:53.926485 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/cp-metrics/0.log" Dec 01 10:44:54 crc kubenswrapper[4933]: I1201 10:44:54.384547 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/cp-metrics/0.log" Dec 01 10:44:54 crc kubenswrapper[4933]: I1201 10:44:54.683394 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/cp-metrics/0.log" Dec 01 10:44:54 crc kubenswrapper[4933]: I1201 10:44:54.696736 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/cp-reloader/0.log" Dec 01 10:44:54 crc kubenswrapper[4933]: I1201 10:44:54.715279 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/cp-frr-files/0.log" Dec 01 10:44:54 crc kubenswrapper[4933]: I1201 10:44:54.737449 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/controller/0.log" Dec 01 10:44:54 crc kubenswrapper[4933]: I1201 10:44:54.955866 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/kube-rbac-proxy/0.log" Dec 01 10:44:55 crc kubenswrapper[4933]: I1201 10:44:55.014793 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/frr-metrics/0.log" Dec 01 10:44:55 crc kubenswrapper[4933]: I1201 10:44:55.151416 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/kube-rbac-proxy-frr/0.log" Dec 01 10:44:55 crc kubenswrapper[4933]: I1201 10:44:55.329358 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/reloader/0.log" Dec 01 10:44:55 crc kubenswrapper[4933]: I1201 10:44:55.406744 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-kfn29_bbb71b8b-46bf-4013-93d7-f3a58f98b8f0/frr-k8s-webhook-server/0.log" Dec 01 10:44:55 crc kubenswrapper[4933]: I1201 10:44:55.695765 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7d44559b9d-mg6pw_7c2e1948-244d-4059-9f94-0675dfaa751f/manager/0.log" Dec 01 10:44:55 crc kubenswrapper[4933]: I1201 10:44:55.953122 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-64d64d5bf5-hc5zj_db30f3a8-b953-4818-8999-c247744b8c1a/webhook-server/0.log" Dec 01 10:44:55 crc kubenswrapper[4933]: I1201 10:44:55.977039 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-crbpg_7ab2e723-8322-4328-8afc-4b13397a538c/kube-rbac-proxy/0.log" Dec 01 10:44:56 crc kubenswrapper[4933]: I1201 10:44:56.717708 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5hxqv_554f5a34-1fff-4264-8748-c0a8e78e9490/frr/0.log" Dec 01 10:44:56 crc kubenswrapper[4933]: I1201 10:44:56.769391 4933 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-crbpg_7ab2e723-8322-4328-8afc-4b13397a538c/speaker/0.log" Dec 01 10:45:00 crc kubenswrapper[4933]: I1201 10:45:00.203534 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409765-vpkh9"] Dec 01 10:45:00 crc kubenswrapper[4933]: I1201 10:45:00.206447 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409765-vpkh9" Dec 01 10:45:00 crc kubenswrapper[4933]: I1201 10:45:00.208723 4933 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 10:45:00 crc kubenswrapper[4933]: I1201 10:45:00.209764 4933 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 10:45:00 crc kubenswrapper[4933]: I1201 10:45:00.228978 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409765-vpkh9"] Dec 01 10:45:00 crc kubenswrapper[4933]: I1201 10:45:00.281196 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/692be73e-1e02-4fff-9063-d373705eeaf2-config-volume\") pod \"collect-profiles-29409765-vpkh9\" (UID: \"692be73e-1e02-4fff-9063-d373705eeaf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409765-vpkh9" Dec 01 10:45:00 crc kubenswrapper[4933]: I1201 10:45:00.281295 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/692be73e-1e02-4fff-9063-d373705eeaf2-secret-volume\") pod \"collect-profiles-29409765-vpkh9\" (UID: \"692be73e-1e02-4fff-9063-d373705eeaf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409765-vpkh9" Dec 01 10:45:00 crc kubenswrapper[4933]: I1201 10:45:00.281393 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v82kg\" (UniqueName: \"kubernetes.io/projected/692be73e-1e02-4fff-9063-d373705eeaf2-kube-api-access-v82kg\") pod \"collect-profiles-29409765-vpkh9\" (UID: \"692be73e-1e02-4fff-9063-d373705eeaf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409765-vpkh9" Dec 01 10:45:00 crc kubenswrapper[4933]: I1201 10:45:00.383120 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/692be73e-1e02-4fff-9063-d373705eeaf2-secret-volume\") pod \"collect-profiles-29409765-vpkh9\" (UID: \"692be73e-1e02-4fff-9063-d373705eeaf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409765-vpkh9" Dec 01 10:45:00 crc kubenswrapper[4933]: I1201 10:45:00.383213 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v82kg\" (UniqueName: \"kubernetes.io/projected/692be73e-1e02-4fff-9063-d373705eeaf2-kube-api-access-v82kg\") pod \"collect-profiles-29409765-vpkh9\" (UID: \"692be73e-1e02-4fff-9063-d373705eeaf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409765-vpkh9" Dec 01 10:45:00 crc kubenswrapper[4933]: I1201 10:45:00.383399 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/692be73e-1e02-4fff-9063-d373705eeaf2-config-volume\") pod 
\"collect-profiles-29409765-vpkh9\" (UID: \"692be73e-1e02-4fff-9063-d373705eeaf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409765-vpkh9" Dec 01 10:45:00 crc kubenswrapper[4933]: I1201 10:45:00.384786 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/692be73e-1e02-4fff-9063-d373705eeaf2-config-volume\") pod \"collect-profiles-29409765-vpkh9\" (UID: \"692be73e-1e02-4fff-9063-d373705eeaf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409765-vpkh9" Dec 01 10:45:00 crc kubenswrapper[4933]: I1201 10:45:00.402606 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/692be73e-1e02-4fff-9063-d373705eeaf2-secret-volume\") pod \"collect-profiles-29409765-vpkh9\" (UID: \"692be73e-1e02-4fff-9063-d373705eeaf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409765-vpkh9" Dec 01 10:45:00 crc kubenswrapper[4933]: I1201 10:45:00.411821 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v82kg\" (UniqueName: \"kubernetes.io/projected/692be73e-1e02-4fff-9063-d373705eeaf2-kube-api-access-v82kg\") pod \"collect-profiles-29409765-vpkh9\" (UID: \"692be73e-1e02-4fff-9063-d373705eeaf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409765-vpkh9" Dec 01 10:45:00 crc kubenswrapper[4933]: I1201 10:45:00.536410 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409765-vpkh9" Dec 01 10:45:01 crc kubenswrapper[4933]: I1201 10:45:01.074965 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409765-vpkh9"] Dec 01 10:45:01 crc kubenswrapper[4933]: I1201 10:45:01.657489 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409765-vpkh9" event={"ID":"692be73e-1e02-4fff-9063-d373705eeaf2","Type":"ContainerStarted","Data":"e1521c6711c7dffd61ef1b28786657ba4fa1040d0481646edb8e45a3a2d41ddf"} Dec 01 10:45:01 crc kubenswrapper[4933]: I1201 10:45:01.658010 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409765-vpkh9" event={"ID":"692be73e-1e02-4fff-9063-d373705eeaf2","Type":"ContainerStarted","Data":"43e86f4a1bde128e50d37044bd786b438abb5b497d86cf7b4b4721ad009c78cf"} Dec 01 10:45:01 crc kubenswrapper[4933]: I1201 10:45:01.688954 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29409765-vpkh9" podStartSLOduration=1.688925522 podStartE2EDuration="1.688925522s" podCreationTimestamp="2025-12-01 10:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 10:45:01.683855317 +0000 UTC m=+4392.325578932" watchObservedRunningTime="2025-12-01 10:45:01.688925522 +0000 UTC m=+4392.330649137" Dec 01 10:45:01 crc kubenswrapper[4933]: I1201 10:45:01.785182 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bcswt" Dec 01 10:45:01 crc kubenswrapper[4933]: I1201 10:45:01.855349 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bcswt" Dec 01 10:45:02 crc kubenswrapper[4933]: I1201 10:45:02.034770 4933 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bcswt"] Dec 01 10:45:02 crc kubenswrapper[4933]: I1201 10:45:02.671127 4933 generic.go:334] "Generic (PLEG): container finished" podID="692be73e-1e02-4fff-9063-d373705eeaf2" containerID="e1521c6711c7dffd61ef1b28786657ba4fa1040d0481646edb8e45a3a2d41ddf" exitCode=0 Dec 01 10:45:02 crc kubenswrapper[4933]: I1201 10:45:02.671925 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409765-vpkh9" event={"ID":"692be73e-1e02-4fff-9063-d373705eeaf2","Type":"ContainerDied","Data":"e1521c6711c7dffd61ef1b28786657ba4fa1040d0481646edb8e45a3a2d41ddf"} Dec 01 10:45:03 crc kubenswrapper[4933]: I1201 10:45:03.682247 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bcswt" podUID="8f027136-6631-41cc-9f76-37f6cfcd6fac" containerName="registry-server" containerID="cri-o://5223bfd394fdc87104ee02ce63cf205f186ce5a33137634e787c216aeb56d3f6" gracePeriod=2 Dec 01 10:45:04 crc kubenswrapper[4933]: I1201 10:45:04.488959 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409765-vpkh9" Dec 01 10:45:04 crc kubenswrapper[4933]: I1201 10:45:04.619566 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/692be73e-1e02-4fff-9063-d373705eeaf2-secret-volume\") pod \"692be73e-1e02-4fff-9063-d373705eeaf2\" (UID: \"692be73e-1e02-4fff-9063-d373705eeaf2\") " Dec 01 10:45:04 crc kubenswrapper[4933]: I1201 10:45:04.619768 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/692be73e-1e02-4fff-9063-d373705eeaf2-config-volume\") pod \"692be73e-1e02-4fff-9063-d373705eeaf2\" (UID: \"692be73e-1e02-4fff-9063-d373705eeaf2\") " Dec 01 10:45:04 crc kubenswrapper[4933]: I1201 10:45:04.619803 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v82kg\" (UniqueName: \"kubernetes.io/projected/692be73e-1e02-4fff-9063-d373705eeaf2-kube-api-access-v82kg\") pod \"692be73e-1e02-4fff-9063-d373705eeaf2\" (UID: \"692be73e-1e02-4fff-9063-d373705eeaf2\") " Dec 01 10:45:04 crc kubenswrapper[4933]: I1201 10:45:04.620957 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/692be73e-1e02-4fff-9063-d373705eeaf2-config-volume" (OuterVolumeSpecName: "config-volume") pod "692be73e-1e02-4fff-9063-d373705eeaf2" (UID: "692be73e-1e02-4fff-9063-d373705eeaf2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 10:45:04 crc kubenswrapper[4933]: I1201 10:45:04.640991 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/692be73e-1e02-4fff-9063-d373705eeaf2-kube-api-access-v82kg" (OuterVolumeSpecName: "kube-api-access-v82kg") pod "692be73e-1e02-4fff-9063-d373705eeaf2" (UID: "692be73e-1e02-4fff-9063-d373705eeaf2"). InnerVolumeSpecName "kube-api-access-v82kg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:45:04 crc kubenswrapper[4933]: I1201 10:45:04.642970 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/692be73e-1e02-4fff-9063-d373705eeaf2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "692be73e-1e02-4fff-9063-d373705eeaf2" (UID: "692be73e-1e02-4fff-9063-d373705eeaf2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 10:45:04 crc kubenswrapper[4933]: I1201 10:45:04.700756 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409765-vpkh9" event={"ID":"692be73e-1e02-4fff-9063-d373705eeaf2","Type":"ContainerDied","Data":"43e86f4a1bde128e50d37044bd786b438abb5b497d86cf7b4b4721ad009c78cf"} Dec 01 10:45:04 crc kubenswrapper[4933]: I1201 10:45:04.700806 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43e86f4a1bde128e50d37044bd786b438abb5b497d86cf7b4b4721ad009c78cf" Dec 01 10:45:04 crc kubenswrapper[4933]: I1201 10:45:04.700897 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409765-vpkh9" Dec 01 10:45:04 crc kubenswrapper[4933]: I1201 10:45:04.706010 4933 generic.go:334] "Generic (PLEG): container finished" podID="8f027136-6631-41cc-9f76-37f6cfcd6fac" containerID="5223bfd394fdc87104ee02ce63cf205f186ce5a33137634e787c216aeb56d3f6" exitCode=0 Dec 01 10:45:04 crc kubenswrapper[4933]: I1201 10:45:04.706043 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcswt" event={"ID":"8f027136-6631-41cc-9f76-37f6cfcd6fac","Type":"ContainerDied","Data":"5223bfd394fdc87104ee02ce63cf205f186ce5a33137634e787c216aeb56d3f6"} Dec 01 10:45:04 crc kubenswrapper[4933]: I1201 10:45:04.706062 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcswt" event={"ID":"8f027136-6631-41cc-9f76-37f6cfcd6fac","Type":"ContainerDied","Data":"fe4ba8b4d8badfdcb1b16944a4b7e2ce486864b201348105cad7b0231ed1cee1"} Dec 01 10:45:04 crc kubenswrapper[4933]: I1201 10:45:04.706074 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe4ba8b4d8badfdcb1b16944a4b7e2ce486864b201348105cad7b0231ed1cee1" Dec 01 10:45:04 crc kubenswrapper[4933]: I1201 10:45:04.722181 4933 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/692be73e-1e02-4fff-9063-d373705eeaf2-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 10:45:04 crc kubenswrapper[4933]: I1201 10:45:04.722203 4933 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/692be73e-1e02-4fff-9063-d373705eeaf2-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 10:45:04 crc kubenswrapper[4933]: I1201 10:45:04.722215 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v82kg\" (UniqueName: \"kubernetes.io/projected/692be73e-1e02-4fff-9063-d373705eeaf2-kube-api-access-v82kg\") on node \"crc\" DevicePath \"\"" Dec 01 10:45:04 crc kubenswrapper[4933]: I1201 10:45:04.750591 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bcswt" Dec 01 10:45:04 crc kubenswrapper[4933]: I1201 10:45:04.776477 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409720-6wbwj"] Dec 01 10:45:04 crc kubenswrapper[4933]: I1201 10:45:04.795418 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409720-6wbwj"] Dec 01 10:45:04 crc kubenswrapper[4933]: I1201 10:45:04.925761 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmvrj\" (UniqueName: \"kubernetes.io/projected/8f027136-6631-41cc-9f76-37f6cfcd6fac-kube-api-access-hmvrj\") pod \"8f027136-6631-41cc-9f76-37f6cfcd6fac\" (UID: \"8f027136-6631-41cc-9f76-37f6cfcd6fac\") " Dec 01 10:45:04 crc kubenswrapper[4933]: I1201 10:45:04.925866 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f027136-6631-41cc-9f76-37f6cfcd6fac-catalog-content\") pod \"8f027136-6631-41cc-9f76-37f6cfcd6fac\" (UID: \"8f027136-6631-41cc-9f76-37f6cfcd6fac\") " Dec 01 10:45:04 crc kubenswrapper[4933]: I1201 10:45:04.925904 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f027136-6631-41cc-9f76-37f6cfcd6fac-utilities\") pod \"8f027136-6631-41cc-9f76-37f6cfcd6fac\" (UID: \"8f027136-6631-41cc-9f76-37f6cfcd6fac\") " Dec 01 10:45:04 crc kubenswrapper[4933]: I1201 10:45:04.927077 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f027136-6631-41cc-9f76-37f6cfcd6fac-utilities" (OuterVolumeSpecName: "utilities") pod "8f027136-6631-41cc-9f76-37f6cfcd6fac" (UID: "8f027136-6631-41cc-9f76-37f6cfcd6fac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:45:04 crc kubenswrapper[4933]: I1201 10:45:04.933369 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f027136-6631-41cc-9f76-37f6cfcd6fac-kube-api-access-hmvrj" (OuterVolumeSpecName: "kube-api-access-hmvrj") pod "8f027136-6631-41cc-9f76-37f6cfcd6fac" (UID: "8f027136-6631-41cc-9f76-37f6cfcd6fac"). InnerVolumeSpecName "kube-api-access-hmvrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:45:05 crc kubenswrapper[4933]: I1201 10:45:05.028862 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmvrj\" (UniqueName: \"kubernetes.io/projected/8f027136-6631-41cc-9f76-37f6cfcd6fac-kube-api-access-hmvrj\") on node \"crc\" DevicePath \"\"" Dec 01 10:45:05 crc kubenswrapper[4933]: I1201 10:45:05.028909 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f027136-6631-41cc-9f76-37f6cfcd6fac-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 10:45:05 crc kubenswrapper[4933]: I1201 10:45:05.043161 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f027136-6631-41cc-9f76-37f6cfcd6fac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f027136-6631-41cc-9f76-37f6cfcd6fac" (UID: "8f027136-6631-41cc-9f76-37f6cfcd6fac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:45:05 crc kubenswrapper[4933]: I1201 10:45:05.131911 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f027136-6631-41cc-9f76-37f6cfcd6fac-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 10:45:05 crc kubenswrapper[4933]: I1201 10:45:05.683717 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4f18f49-2a43-429f-8406-5177afaeacfd" path="/var/lib/kubelet/pods/a4f18f49-2a43-429f-8406-5177afaeacfd/volumes" Dec 01 10:45:05 crc kubenswrapper[4933]: I1201 10:45:05.715548 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bcswt" Dec 01 10:45:05 crc kubenswrapper[4933]: I1201 10:45:05.751038 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bcswt"] Dec 01 10:45:05 crc kubenswrapper[4933]: I1201 10:45:05.761208 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bcswt"] Dec 01 10:45:07 crc kubenswrapper[4933]: I1201 10:45:07.682722 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f027136-6631-41cc-9f76-37f6cfcd6fac" path="/var/lib/kubelet/pods/8f027136-6631-41cc-9f76-37f6cfcd6fac/volumes" Dec 01 10:45:12 crc kubenswrapper[4933]: I1201 10:45:12.348033 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm_61cb8d89-75c1-4be3-9c9b-ff1337d73c4e/util/0.log" Dec 01 10:45:12 crc kubenswrapper[4933]: I1201 10:45:12.527756 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm_61cb8d89-75c1-4be3-9c9b-ff1337d73c4e/util/0.log" Dec 01 10:45:12 crc kubenswrapper[4933]: I1201 10:45:12.537091 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm_61cb8d89-75c1-4be3-9c9b-ff1337d73c4e/pull/0.log" Dec 01 10:45:12 crc kubenswrapper[4933]: I1201 10:45:12.537174 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm_61cb8d89-75c1-4be3-9c9b-ff1337d73c4e/pull/0.log" Dec 01 10:45:12 crc kubenswrapper[4933]: I1201 10:45:12.770715 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm_61cb8d89-75c1-4be3-9c9b-ff1337d73c4e/util/0.log" Dec 01 10:45:12 crc kubenswrapper[4933]: I1201 10:45:12.793920 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm_61cb8d89-75c1-4be3-9c9b-ff1337d73c4e/extract/0.log" Dec 01 10:45:12 crc kubenswrapper[4933]: I1201 10:45:12.802284 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhqfbm_61cb8d89-75c1-4be3-9c9b-ff1337d73c4e/pull/0.log" Dec 01 10:45:13 crc kubenswrapper[4933]: I1201 10:45:13.004083 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4_dccc76f6-8c6b-4e59-b7a1-0b0892183838/util/0.log" Dec 01 10:45:13 crc kubenswrapper[4933]: I1201 10:45:13.218632 4933 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4_dccc76f6-8c6b-4e59-b7a1-0b0892183838/pull/0.log" Dec 01 10:45:13 crc kubenswrapper[4933]: I1201 10:45:13.225040 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4_dccc76f6-8c6b-4e59-b7a1-0b0892183838/pull/0.log" Dec 01 10:45:13 crc kubenswrapper[4933]: I1201 10:45:13.236130 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4_dccc76f6-8c6b-4e59-b7a1-0b0892183838/util/0.log" Dec 01 10:45:13 crc kubenswrapper[4933]: I1201 10:45:13.482820 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4_dccc76f6-8c6b-4e59-b7a1-0b0892183838/util/0.log" Dec 01 10:45:13 crc kubenswrapper[4933]: I1201 10:45:13.499823 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4_dccc76f6-8c6b-4e59-b7a1-0b0892183838/pull/0.log" Dec 01 10:45:13 crc kubenswrapper[4933]: I1201 10:45:13.544567 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2pd4_dccc76f6-8c6b-4e59-b7a1-0b0892183838/extract/0.log" Dec 01 10:45:13 crc kubenswrapper[4933]: I1201 10:45:13.668531 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4hfd4_e1c51de3-51f6-4fb3-9800-fb97313a6212/extract-utilities/0.log" Dec 01 10:45:13 crc kubenswrapper[4933]: I1201 10:45:13.901719 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4hfd4_e1c51de3-51f6-4fb3-9800-fb97313a6212/extract-utilities/0.log" Dec 01 10:45:13 crc kubenswrapper[4933]: I1201 10:45:13.902018 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4hfd4_e1c51de3-51f6-4fb3-9800-fb97313a6212/extract-content/0.log" Dec 01 10:45:13 crc kubenswrapper[4933]: I1201 10:45:13.939204 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4hfd4_e1c51de3-51f6-4fb3-9800-fb97313a6212/extract-content/0.log" Dec 01 10:45:14 crc kubenswrapper[4933]: I1201 10:45:14.185088 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4hfd4_e1c51de3-51f6-4fb3-9800-fb97313a6212/extract-content/0.log" Dec 01 10:45:14 crc kubenswrapper[4933]: I1201 10:45:14.204524 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4hfd4_e1c51de3-51f6-4fb3-9800-fb97313a6212/extract-utilities/0.log" Dec 01 10:45:14 crc kubenswrapper[4933]: I1201 10:45:14.465166 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gpq4t_a5754f16-70b2-4f07-96ae-5233861175bb/extract-utilities/0.log" Dec 01 10:45:14 crc kubenswrapper[4933]: I1201 10:45:14.786386 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gpq4t_a5754f16-70b2-4f07-96ae-5233861175bb/extract-content/0.log" Dec 01 10:45:14 crc kubenswrapper[4933]: I1201 10:45:14.830987 4933 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-gpq4t_a5754f16-70b2-4f07-96ae-5233861175bb/extract-utilities/0.log" Dec 01 10:45:14 crc kubenswrapper[4933]: I1201 10:45:14.903631 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gpq4t_a5754f16-70b2-4f07-96ae-5233861175bb/extract-content/0.log" Dec 01 10:45:15 crc kubenswrapper[4933]: I1201 10:45:15.049903 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gpq4t_a5754f16-70b2-4f07-96ae-5233861175bb/extract-utilities/0.log" Dec 01 10:45:15 crc kubenswrapper[4933]: I1201 10:45:15.102675 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gpq4t_a5754f16-70b2-4f07-96ae-5233861175bb/extract-content/0.log" Dec 01 10:45:15 crc kubenswrapper[4933]: I1201 10:45:15.163417 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4hfd4_e1c51de3-51f6-4fb3-9800-fb97313a6212/registry-server/0.log" Dec 01 10:45:15 crc kubenswrapper[4933]: I1201 10:45:15.363335 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8w4rb_972d2150-cea0-4c55-9be0-bc7022d630e2/marketplace-operator/0.log" Dec 01 10:45:15 crc kubenswrapper[4933]: I1201 10:45:15.542093 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gpq4t_a5754f16-70b2-4f07-96ae-5233861175bb/registry-server/0.log" Dec 01 10:45:15 crc kubenswrapper[4933]: I1201 10:45:15.581218 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cvjll_879fdd94-31c3-4e2b-b47e-291738616c68/extract-utilities/0.log" Dec 01 10:45:16 crc kubenswrapper[4933]: I1201 10:45:16.463775 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cvjll_879fdd94-31c3-4e2b-b47e-291738616c68/extract-content/0.log" Dec 01 10:45:16 crc kubenswrapper[4933]: I1201 10:45:16.469627 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cvjll_879fdd94-31c3-4e2b-b47e-291738616c68/extract-utilities/0.log" Dec 01 10:45:16 crc kubenswrapper[4933]: I1201 10:45:16.492275 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cvjll_879fdd94-31c3-4e2b-b47e-291738616c68/extract-content/0.log" Dec 01 10:45:16 crc kubenswrapper[4933]: I1201 10:45:16.675196 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cvjll_879fdd94-31c3-4e2b-b47e-291738616c68/extract-content/0.log" Dec 01 10:45:16 crc kubenswrapper[4933]: I1201 10:45:16.742082 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cvjll_879fdd94-31c3-4e2b-b47e-291738616c68/extract-utilities/0.log" Dec 01 10:45:16 crc kubenswrapper[4933]: I1201 10:45:16.815005 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4w7n9_56c63d65-3e36-4df8-9e8c-96a87d3a40d4/extract-utilities/0.log" Dec 01 10:45:16 crc kubenswrapper[4933]: I1201 10:45:16.905691 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cvjll_879fdd94-31c3-4e2b-b47e-291738616c68/registry-server/0.log" Dec 01 10:45:16 crc kubenswrapper[4933]: I1201 10:45:16.967189 4933 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-4w7n9_56c63d65-3e36-4df8-9e8c-96a87d3a40d4/extract-utilities/0.log" Dec 01 10:45:17 crc kubenswrapper[4933]: I1201 10:45:17.003472 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4w7n9_56c63d65-3e36-4df8-9e8c-96a87d3a40d4/extract-content/0.log" Dec 01 10:45:17 crc kubenswrapper[4933]: I1201 10:45:17.073969 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4w7n9_56c63d65-3e36-4df8-9e8c-96a87d3a40d4/extract-content/0.log" Dec 01 10:45:17 crc kubenswrapper[4933]: I1201 10:45:17.278840 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4w7n9_56c63d65-3e36-4df8-9e8c-96a87d3a40d4/extract-utilities/0.log" Dec 01 10:45:17 crc kubenswrapper[4933]: I1201 10:45:17.312848 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4w7n9_56c63d65-3e36-4df8-9e8c-96a87d3a40d4/extract-content/0.log" Dec 01 10:45:17 crc kubenswrapper[4933]: I1201 10:45:17.614619 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4w7n9_56c63d65-3e36-4df8-9e8c-96a87d3a40d4/registry-server/0.log" Dec 01 10:46:00 crc kubenswrapper[4933]: I1201 10:46:00.631487 4933 scope.go:117] "RemoveContainer" containerID="d5603104d3d0719fd7d505e105343c9f19686a898bebe128d02b4f19c5e5f639" Dec 01 10:46:41 crc kubenswrapper[4933]: I1201 10:46:41.740913 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 10:46:41 crc kubenswrapper[4933]: I1201 10:46:41.742026 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 10:47:11 crc kubenswrapper[4933]: I1201 10:47:11.741259 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 10:47:11 crc kubenswrapper[4933]: I1201 10:47:11.742128 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 10:47:15 crc kubenswrapper[4933]: I1201 10:47:15.234746 4933 generic.go:334] "Generic (PLEG): container finished" podID="116ab7e1-9a57-4640-8e97-a3e2140b402b" containerID="b9eea1b8dbf7814e7da32a7572f078b8aaacb223958f41103e3d374e4856c56a" exitCode=0 Dec 01 10:47:15 crc kubenswrapper[4933]: I1201 10:47:15.234845 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-smjwx/must-gather-qsc4x" event={"ID":"116ab7e1-9a57-4640-8e97-a3e2140b402b","Type":"ContainerDied","Data":"b9eea1b8dbf7814e7da32a7572f078b8aaacb223958f41103e3d374e4856c56a"} Dec 01 10:47:15 
crc kubenswrapper[4933]: I1201 10:47:15.236978 4933 scope.go:117] "RemoveContainer" containerID="b9eea1b8dbf7814e7da32a7572f078b8aaacb223958f41103e3d374e4856c56a" Dec 01 10:47:15 crc kubenswrapper[4933]: I1201 10:47:15.477964 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-smjwx_must-gather-qsc4x_116ab7e1-9a57-4640-8e97-a3e2140b402b/gather/0.log" Dec 01 10:47:25 crc kubenswrapper[4933]: I1201 10:47:25.974598 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-smjwx/must-gather-qsc4x"] Dec 01 10:47:25 crc kubenswrapper[4933]: I1201 10:47:25.975812 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-smjwx/must-gather-qsc4x" podUID="116ab7e1-9a57-4640-8e97-a3e2140b402b" containerName="copy" containerID="cri-o://908aba6518121c1380ab970c27834f3f53682dcde6f80656ede298b599973068" gracePeriod=2 Dec 01 10:47:25 crc kubenswrapper[4933]: I1201 10:47:25.985438 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-smjwx/must-gather-qsc4x"] Dec 01 10:47:26 crc kubenswrapper[4933]: I1201 10:47:26.372958 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-smjwx_must-gather-qsc4x_116ab7e1-9a57-4640-8e97-a3e2140b402b/copy/0.log" Dec 01 10:47:26 crc kubenswrapper[4933]: I1201 10:47:26.375545 4933 generic.go:334] "Generic (PLEG): container finished" podID="116ab7e1-9a57-4640-8e97-a3e2140b402b" containerID="908aba6518121c1380ab970c27834f3f53682dcde6f80656ede298b599973068" exitCode=143 Dec 01 10:47:26 crc kubenswrapper[4933]: I1201 10:47:26.375615 4933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc4423ee430b82a0daf244e3a4197f6eea8887998421aea7241fc389b76c0e04" Dec 01 10:47:26 crc kubenswrapper[4933]: I1201 10:47:26.471496 4933 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-smjwx_must-gather-qsc4x_116ab7e1-9a57-4640-8e97-a3e2140b402b/copy/0.log" Dec 01 10:47:26 crc kubenswrapper[4933]: I1201 10:47:26.471982 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-smjwx/must-gather-qsc4x" Dec 01 10:47:26 crc kubenswrapper[4933]: I1201 10:47:26.519124 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7xkf\" (UniqueName: \"kubernetes.io/projected/116ab7e1-9a57-4640-8e97-a3e2140b402b-kube-api-access-c7xkf\") pod \"116ab7e1-9a57-4640-8e97-a3e2140b402b\" (UID: \"116ab7e1-9a57-4640-8e97-a3e2140b402b\") " Dec 01 10:47:26 crc kubenswrapper[4933]: I1201 10:47:26.519196 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/116ab7e1-9a57-4640-8e97-a3e2140b402b-must-gather-output\") pod \"116ab7e1-9a57-4640-8e97-a3e2140b402b\" (UID: \"116ab7e1-9a57-4640-8e97-a3e2140b402b\") " Dec 01 10:47:26 crc kubenswrapper[4933]: I1201 10:47:26.526820 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/116ab7e1-9a57-4640-8e97-a3e2140b402b-kube-api-access-c7xkf" (OuterVolumeSpecName: "kube-api-access-c7xkf") pod "116ab7e1-9a57-4640-8e97-a3e2140b402b" (UID: "116ab7e1-9a57-4640-8e97-a3e2140b402b"). InnerVolumeSpecName "kube-api-access-c7xkf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:47:26 crc kubenswrapper[4933]: I1201 10:47:26.621914 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7xkf\" (UniqueName: \"kubernetes.io/projected/116ab7e1-9a57-4640-8e97-a3e2140b402b-kube-api-access-c7xkf\") on node \"crc\" DevicePath \"\"" Dec 01 10:47:26 crc kubenswrapper[4933]: I1201 10:47:26.676814 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/116ab7e1-9a57-4640-8e97-a3e2140b402b-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "116ab7e1-9a57-4640-8e97-a3e2140b402b" (UID: "116ab7e1-9a57-4640-8e97-a3e2140b402b"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:47:26 crc kubenswrapper[4933]: I1201 10:47:26.724192 4933 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/116ab7e1-9a57-4640-8e97-a3e2140b402b-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 01 10:47:27 crc kubenswrapper[4933]: I1201 10:47:27.384712 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-smjwx/must-gather-qsc4x" Dec 01 10:47:27 crc kubenswrapper[4933]: I1201 10:47:27.681143 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="116ab7e1-9a57-4640-8e97-a3e2140b402b" path="/var/lib/kubelet/pods/116ab7e1-9a57-4640-8e97-a3e2140b402b/volumes" Dec 01 10:47:41 crc kubenswrapper[4933]: I1201 10:47:41.740948 4933 patch_prober.go:28] interesting pod/machine-config-daemon-k4lcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 10:47:41 crc kubenswrapper[4933]: I1201 10:47:41.741862 4933 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 10:47:41 crc kubenswrapper[4933]: I1201 10:47:41.741933 4933 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" Dec 01 10:47:41 crc kubenswrapper[4933]: I1201 10:47:41.743358 4933 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"700470a90e5bac216ac65c0c1909e7b5b54ddd8643e5e759fd5d0fe0b7af56cb"} pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 10:47:41 crc kubenswrapper[4933]: I1201 10:47:41.743444 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerName="machine-config-daemon" containerID="cri-o://700470a90e5bac216ac65c0c1909e7b5b54ddd8643e5e759fd5d0fe0b7af56cb" gracePeriod=600 Dec 01 10:47:41 crc kubenswrapper[4933]: E1201 10:47:41.881313 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:47:42 crc kubenswrapper[4933]: I1201 10:47:42.542888 4933 generic.go:334] "Generic (PLEG): container finished" podID="31deca5a-8ffe-4967-b02f-98a2043ddb23" containerID="700470a90e5bac216ac65c0c1909e7b5b54ddd8643e5e759fd5d0fe0b7af56cb" exitCode=0 Dec 01 10:47:42 crc kubenswrapper[4933]: I1201 10:47:42.543015 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" event={"ID":"31deca5a-8ffe-4967-b02f-98a2043ddb23","Type":"ContainerDied","Data":"700470a90e5bac216ac65c0c1909e7b5b54ddd8643e5e759fd5d0fe0b7af56cb"} Dec 01 10:47:42 crc kubenswrapper[4933]: I1201 10:47:42.543180 4933 scope.go:117] "RemoveContainer" containerID="5980ab50a1619ce76cb421b80524ebb405bf0c56dae61e64a1b53628b706bcb0" Dec 01 10:47:42 crc kubenswrapper[4933]: I1201 10:47:42.544014 4933 scope.go:117] "RemoveContainer" containerID="700470a90e5bac216ac65c0c1909e7b5b54ddd8643e5e759fd5d0fe0b7af56cb" Dec 01 10:47:42 crc kubenswrapper[4933]: E1201 10:47:42.544357 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:47:55 crc kubenswrapper[4933]: I1201 10:47:55.668220 4933 scope.go:117] "RemoveContainer" containerID="700470a90e5bac216ac65c0c1909e7b5b54ddd8643e5e759fd5d0fe0b7af56cb" Dec 01 10:47:55 crc kubenswrapper[4933]: E1201 10:47:55.669245 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:48:00 crc kubenswrapper[4933]: I1201 10:48:00.768090 4933 scope.go:117] "RemoveContainer" containerID="71aef67b9f4ab45810d8bc531974135e99c95a3a5b042403fd6ee87cfa9eac64" Dec 01 10:48:01 crc kubenswrapper[4933]: I1201 10:48:01.088063 4933 scope.go:117] "RemoveContainer" containerID="908aba6518121c1380ab970c27834f3f53682dcde6f80656ede298b599973068" Dec 01 10:48:01 crc kubenswrapper[4933]: I1201 10:48:01.150988 4933 scope.go:117] "RemoveContainer" containerID="b9eea1b8dbf7814e7da32a7572f078b8aaacb223958f41103e3d374e4856c56a" Dec 01 10:48:09 crc kubenswrapper[4933]: I1201 10:48:09.674241 4933 scope.go:117] "RemoveContainer" containerID="700470a90e5bac216ac65c0c1909e7b5b54ddd8643e5e759fd5d0fe0b7af56cb" Dec 01 10:48:09 crc kubenswrapper[4933]: E1201 10:48:09.675002 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" 
podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:48:23 crc kubenswrapper[4933]: I1201 10:48:23.216409 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t5ljc"] Dec 01 10:48:23 crc kubenswrapper[4933]: E1201 10:48:23.217269 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="116ab7e1-9a57-4640-8e97-a3e2140b402b" containerName="copy" Dec 01 10:48:23 crc kubenswrapper[4933]: I1201 10:48:23.217283 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="116ab7e1-9a57-4640-8e97-a3e2140b402b" containerName="copy" Dec 01 10:48:23 crc kubenswrapper[4933]: E1201 10:48:23.217325 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692be73e-1e02-4fff-9063-d373705eeaf2" containerName="collect-profiles" Dec 01 10:48:23 crc kubenswrapper[4933]: I1201 10:48:23.217334 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="692be73e-1e02-4fff-9063-d373705eeaf2" containerName="collect-profiles" Dec 01 10:48:23 crc kubenswrapper[4933]: E1201 10:48:23.217347 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f027136-6631-41cc-9f76-37f6cfcd6fac" containerName="extract-utilities" Dec 01 10:48:23 crc kubenswrapper[4933]: I1201 10:48:23.217354 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f027136-6631-41cc-9f76-37f6cfcd6fac" containerName="extract-utilities" Dec 01 10:48:23 crc kubenswrapper[4933]: E1201 10:48:23.217364 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f027136-6631-41cc-9f76-37f6cfcd6fac" containerName="registry-server" Dec 01 10:48:23 crc kubenswrapper[4933]: I1201 10:48:23.217370 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f027136-6631-41cc-9f76-37f6cfcd6fac" containerName="registry-server" Dec 01 10:48:23 crc kubenswrapper[4933]: E1201 10:48:23.217388 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="116ab7e1-9a57-4640-8e97-a3e2140b402b" containerName="gather" Dec 01 10:48:23 crc kubenswrapper[4933]: I1201 10:48:23.217393 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="116ab7e1-9a57-4640-8e97-a3e2140b402b" containerName="gather" Dec 01 10:48:23 crc kubenswrapper[4933]: E1201 10:48:23.217408 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f027136-6631-41cc-9f76-37f6cfcd6fac" containerName="extract-content" Dec 01 10:48:23 crc kubenswrapper[4933]: I1201 10:48:23.217417 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f027136-6631-41cc-9f76-37f6cfcd6fac" containerName="extract-content" Dec 01 10:48:23 crc kubenswrapper[4933]: I1201 10:48:23.217585 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="116ab7e1-9a57-4640-8e97-a3e2140b402b" containerName="copy" Dec 01 10:48:23 crc kubenswrapper[4933]: I1201 10:48:23.217613 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="116ab7e1-9a57-4640-8e97-a3e2140b402b" containerName="gather" Dec 01 10:48:23 crc kubenswrapper[4933]: I1201 10:48:23.217624 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="692be73e-1e02-4fff-9063-d373705eeaf2" containerName="collect-profiles" Dec 01 10:48:23 crc kubenswrapper[4933]: I1201 10:48:23.217633 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f027136-6631-41cc-9f76-37f6cfcd6fac" containerName="registry-server" Dec 01 10:48:23 crc kubenswrapper[4933]: I1201 10:48:23.219068 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t5ljc" Dec 01 10:48:23 crc kubenswrapper[4933]: I1201 10:48:23.255694 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t5ljc"] Dec 01 10:48:23 crc kubenswrapper[4933]: I1201 10:48:23.332018 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6cf6da7-8a4a-4384-a8ef-86453c60beb6-catalog-content\") pod \"community-operators-t5ljc\" (UID: \"c6cf6da7-8a4a-4384-a8ef-86453c60beb6\") " pod="openshift-marketplace/community-operators-t5ljc" Dec 01 10:48:23 crc kubenswrapper[4933]: I1201 10:48:23.332138 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6cf6da7-8a4a-4384-a8ef-86453c60beb6-utilities\") pod \"community-operators-t5ljc\" (UID: \"c6cf6da7-8a4a-4384-a8ef-86453c60beb6\") " pod="openshift-marketplace/community-operators-t5ljc" Dec 01 10:48:23 crc kubenswrapper[4933]: I1201 10:48:23.332212 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfbfw\" (UniqueName: \"kubernetes.io/projected/c6cf6da7-8a4a-4384-a8ef-86453c60beb6-kube-api-access-vfbfw\") pod \"community-operators-t5ljc\" (UID: \"c6cf6da7-8a4a-4384-a8ef-86453c60beb6\") " pod="openshift-marketplace/community-operators-t5ljc" Dec 01 10:48:23 crc kubenswrapper[4933]: I1201 10:48:23.434732 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6cf6da7-8a4a-4384-a8ef-86453c60beb6-catalog-content\") pod \"community-operators-t5ljc\" (UID: \"c6cf6da7-8a4a-4384-a8ef-86453c60beb6\") " pod="openshift-marketplace/community-operators-t5ljc" Dec 01 10:48:23 crc kubenswrapper[4933]: I1201 10:48:23.434872 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6cf6da7-8a4a-4384-a8ef-86453c60beb6-utilities\") pod \"community-operators-t5ljc\" (UID: \"c6cf6da7-8a4a-4384-a8ef-86453c60beb6\") " pod="openshift-marketplace/community-operators-t5ljc" Dec 01 10:48:23 crc kubenswrapper[4933]: I1201 10:48:23.434965 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfbfw\" (UniqueName: \"kubernetes.io/projected/c6cf6da7-8a4a-4384-a8ef-86453c60beb6-kube-api-access-vfbfw\") pod \"community-operators-t5ljc\" (UID: \"c6cf6da7-8a4a-4384-a8ef-86453c60beb6\") " pod="openshift-marketplace/community-operators-t5ljc" Dec 01 10:48:23 crc kubenswrapper[4933]: I1201 10:48:23.436263 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6cf6da7-8a4a-4384-a8ef-86453c60beb6-catalog-content\") pod \"community-operators-t5ljc\" (UID: \"c6cf6da7-8a4a-4384-a8ef-86453c60beb6\") " pod="openshift-marketplace/community-operators-t5ljc" Dec 01 10:48:23 crc kubenswrapper[4933]: I1201 10:48:23.436618 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6cf6da7-8a4a-4384-a8ef-86453c60beb6-utilities\") pod \"community-operators-t5ljc\" (UID: \"c6cf6da7-8a4a-4384-a8ef-86453c60beb6\") " pod="openshift-marketplace/community-operators-t5ljc" Dec 01 10:48:23 crc kubenswrapper[4933]: I1201 10:48:23.461380 4933 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vfbfw\" (UniqueName: \"kubernetes.io/projected/c6cf6da7-8a4a-4384-a8ef-86453c60beb6-kube-api-access-vfbfw\") pod \"community-operators-t5ljc\" (UID: \"c6cf6da7-8a4a-4384-a8ef-86453c60beb6\") " pod="openshift-marketplace/community-operators-t5ljc" Dec 01 10:48:23 crc kubenswrapper[4933]: I1201 10:48:23.561298 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t5ljc" Dec 01 10:48:23 crc kubenswrapper[4933]: I1201 10:48:23.668373 4933 scope.go:117] "RemoveContainer" containerID="700470a90e5bac216ac65c0c1909e7b5b54ddd8643e5e759fd5d0fe0b7af56cb" Dec 01 10:48:23 crc kubenswrapper[4933]: E1201 10:48:23.668844 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:48:24 crc kubenswrapper[4933]: I1201 10:48:24.098749 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t5ljc"] Dec 01 10:48:25 crc kubenswrapper[4933]: I1201 10:48:25.071831 4933 generic.go:334] "Generic (PLEG): container finished" podID="c6cf6da7-8a4a-4384-a8ef-86453c60beb6" containerID="cb72d9f6320a1d3e9a6296c2fc330881b8e984ffff35f876965a4a3fbc6bf4a5" exitCode=0 Dec 01 10:48:25 crc kubenswrapper[4933]: I1201 10:48:25.071945 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5ljc" event={"ID":"c6cf6da7-8a4a-4384-a8ef-86453c60beb6","Type":"ContainerDied","Data":"cb72d9f6320a1d3e9a6296c2fc330881b8e984ffff35f876965a4a3fbc6bf4a5"} Dec 01 10:48:25 crc kubenswrapper[4933]: I1201 10:48:25.072912 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5ljc" event={"ID":"c6cf6da7-8a4a-4384-a8ef-86453c60beb6","Type":"ContainerStarted","Data":"de8303be5f9d8630e809cbb21c0e563fce5f4d8f005e3301725acd916283f93e"} Dec 01 10:48:26 crc kubenswrapper[4933]: I1201 10:48:26.084629 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5ljc" event={"ID":"c6cf6da7-8a4a-4384-a8ef-86453c60beb6","Type":"ContainerStarted","Data":"a42401b9a1926b58029344b638cf88cf3630b60d49ab324e90b7b7d13ad3e2f8"} Dec 01 10:48:27 crc kubenswrapper[4933]: I1201 10:48:27.097412 4933 generic.go:334] "Generic (PLEG): container finished" podID="c6cf6da7-8a4a-4384-a8ef-86453c60beb6" containerID="a42401b9a1926b58029344b638cf88cf3630b60d49ab324e90b7b7d13ad3e2f8" exitCode=0 Dec 01 10:48:27 crc kubenswrapper[4933]: I1201 10:48:27.097784 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5ljc" event={"ID":"c6cf6da7-8a4a-4384-a8ef-86453c60beb6","Type":"ContainerDied","Data":"a42401b9a1926b58029344b638cf88cf3630b60d49ab324e90b7b7d13ad3e2f8"} Dec 01 10:48:28 crc kubenswrapper[4933]: I1201 10:48:28.119626 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5ljc" event={"ID":"c6cf6da7-8a4a-4384-a8ef-86453c60beb6","Type":"ContainerStarted","Data":"93ad3e18d7c6373d5c6d11b2267a5ec50a69be4fc2f6e177ffb9814f03ec518f"} Dec 01 10:48:28 crc kubenswrapper[4933]: I1201 10:48:28.147203 4933 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t5ljc" podStartSLOduration=2.709223781 podStartE2EDuration="5.147178901s" podCreationTimestamp="2025-12-01 10:48:23 +0000 UTC" firstStartedPulling="2025-12-01 10:48:25.075158847 +0000 UTC m=+4595.716882462" lastFinishedPulling="2025-12-01 10:48:27.513113967 +0000 UTC m=+4598.154837582" observedRunningTime="2025-12-01 10:48:28.138516018 +0000 UTC m=+4598.780239653" watchObservedRunningTime="2025-12-01 10:48:28.147178901 +0000 UTC m=+4598.788902516" Dec 01 10:48:33 crc kubenswrapper[4933]: I1201 10:48:33.562144 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t5ljc" Dec 01 10:48:33 crc kubenswrapper[4933]: I1201 10:48:33.564280 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t5ljc" Dec 01 10:48:33 crc kubenswrapper[4933]: I1201 10:48:33.611942 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t5ljc" Dec 01 10:48:34 crc kubenswrapper[4933]: I1201 10:48:34.231293 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t5ljc" Dec 01 10:48:34 crc kubenswrapper[4933]: I1201 10:48:34.299661 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t5ljc"] Dec 01 10:48:36 crc kubenswrapper[4933]: I1201 10:48:36.210830 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t5ljc" podUID="c6cf6da7-8a4a-4384-a8ef-86453c60beb6" containerName="registry-server" containerID="cri-o://93ad3e18d7c6373d5c6d11b2267a5ec50a69be4fc2f6e177ffb9814f03ec518f" gracePeriod=2 Dec 01 10:48:37 crc kubenswrapper[4933]: I1201 10:48:37.178239 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t5ljc" Dec 01 10:48:37 crc kubenswrapper[4933]: I1201 10:48:37.224592 4933 generic.go:334] "Generic (PLEG): container finished" podID="c6cf6da7-8a4a-4384-a8ef-86453c60beb6" containerID="93ad3e18d7c6373d5c6d11b2267a5ec50a69be4fc2f6e177ffb9814f03ec518f" exitCode=0 Dec 01 10:48:37 crc kubenswrapper[4933]: I1201 10:48:37.224666 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5ljc" event={"ID":"c6cf6da7-8a4a-4384-a8ef-86453c60beb6","Type":"ContainerDied","Data":"93ad3e18d7c6373d5c6d11b2267a5ec50a69be4fc2f6e177ffb9814f03ec518f"} Dec 01 10:48:37 crc kubenswrapper[4933]: I1201 10:48:37.224748 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5ljc" event={"ID":"c6cf6da7-8a4a-4384-a8ef-86453c60beb6","Type":"ContainerDied","Data":"de8303be5f9d8630e809cbb21c0e563fce5f4d8f005e3301725acd916283f93e"} Dec 01 10:48:37 crc kubenswrapper[4933]: I1201 10:48:37.224772 4933 scope.go:117] "RemoveContainer" containerID="93ad3e18d7c6373d5c6d11b2267a5ec50a69be4fc2f6e177ffb9814f03ec518f" Dec 01 10:48:37 crc kubenswrapper[4933]: I1201 10:48:37.224699 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t5ljc" Dec 01 10:48:37 crc kubenswrapper[4933]: I1201 10:48:37.248077 4933 scope.go:117] "RemoveContainer" containerID="a42401b9a1926b58029344b638cf88cf3630b60d49ab324e90b7b7d13ad3e2f8" Dec 01 10:48:37 crc kubenswrapper[4933]: I1201 10:48:37.271264 4933 scope.go:117] "RemoveContainer" containerID="cb72d9f6320a1d3e9a6296c2fc330881b8e984ffff35f876965a4a3fbc6bf4a5" Dec 01 10:48:37 crc kubenswrapper[4933]: I1201 10:48:37.325072 4933 scope.go:117] "RemoveContainer" containerID="93ad3e18d7c6373d5c6d11b2267a5ec50a69be4fc2f6e177ffb9814f03ec518f" Dec 01 10:48:37 crc kubenswrapper[4933]: E1201 10:48:37.325637 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93ad3e18d7c6373d5c6d11b2267a5ec50a69be4fc2f6e177ffb9814f03ec518f\": container with ID starting with 93ad3e18d7c6373d5c6d11b2267a5ec50a69be4fc2f6e177ffb9814f03ec518f not found: ID does not exist" containerID="93ad3e18d7c6373d5c6d11b2267a5ec50a69be4fc2f6e177ffb9814f03ec518f" Dec 01 10:48:37 crc kubenswrapper[4933]: I1201 10:48:37.325693 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93ad3e18d7c6373d5c6d11b2267a5ec50a69be4fc2f6e177ffb9814f03ec518f"} err="failed to get container status \"93ad3e18d7c6373d5c6d11b2267a5ec50a69be4fc2f6e177ffb9814f03ec518f\": rpc error: code = NotFound desc = could not find container \"93ad3e18d7c6373d5c6d11b2267a5ec50a69be4fc2f6e177ffb9814f03ec518f\": container with ID starting with 93ad3e18d7c6373d5c6d11b2267a5ec50a69be4fc2f6e177ffb9814f03ec518f not found: ID does not exist" Dec 01 10:48:37 crc kubenswrapper[4933]: I1201 10:48:37.325724 4933 scope.go:117] "RemoveContainer" containerID="a42401b9a1926b58029344b638cf88cf3630b60d49ab324e90b7b7d13ad3e2f8" Dec 01 10:48:37 crc kubenswrapper[4933]: E1201 10:48:37.326214 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a42401b9a1926b58029344b638cf88cf3630b60d49ab324e90b7b7d13ad3e2f8\": container with ID starting with a42401b9a1926b58029344b638cf88cf3630b60d49ab324e90b7b7d13ad3e2f8 not found: ID does not exist" containerID="a42401b9a1926b58029344b638cf88cf3630b60d49ab324e90b7b7d13ad3e2f8" Dec 01 10:48:37 crc kubenswrapper[4933]: I1201 10:48:37.326248 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a42401b9a1926b58029344b638cf88cf3630b60d49ab324e90b7b7d13ad3e2f8"} err="failed to get container status \"a42401b9a1926b58029344b638cf88cf3630b60d49ab324e90b7b7d13ad3e2f8\": rpc error: code = NotFound desc = could not find container \"a42401b9a1926b58029344b638cf88cf3630b60d49ab324e90b7b7d13ad3e2f8\": container with ID starting with a42401b9a1926b58029344b638cf88cf3630b60d49ab324e90b7b7d13ad3e2f8 not found: ID does not exist" Dec 01 10:48:37 crc kubenswrapper[4933]: I1201 10:48:37.326273 4933 scope.go:117] "RemoveContainer" containerID="cb72d9f6320a1d3e9a6296c2fc330881b8e984ffff35f876965a4a3fbc6bf4a5" Dec 01 10:48:37 crc kubenswrapper[4933]: E1201 10:48:37.326819 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb72d9f6320a1d3e9a6296c2fc330881b8e984ffff35f876965a4a3fbc6bf4a5\": container with ID starting with cb72d9f6320a1d3e9a6296c2fc330881b8e984ffff35f876965a4a3fbc6bf4a5 not found: ID does not exist" containerID="cb72d9f6320a1d3e9a6296c2fc330881b8e984ffff35f876965a4a3fbc6bf4a5" 
Dec 01 10:48:37 crc kubenswrapper[4933]: I1201 10:48:37.326855 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb72d9f6320a1d3e9a6296c2fc330881b8e984ffff35f876965a4a3fbc6bf4a5"} err="failed to get container status \"cb72d9f6320a1d3e9a6296c2fc330881b8e984ffff35f876965a4a3fbc6bf4a5\": rpc error: code = NotFound desc = could not find container \"cb72d9f6320a1d3e9a6296c2fc330881b8e984ffff35f876965a4a3fbc6bf4a5\": container with ID starting with cb72d9f6320a1d3e9a6296c2fc330881b8e984ffff35f876965a4a3fbc6bf4a5 not found: ID does not exist" Dec 01 10:48:37 crc kubenswrapper[4933]: I1201 10:48:37.357985 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfbfw\" (UniqueName: \"kubernetes.io/projected/c6cf6da7-8a4a-4384-a8ef-86453c60beb6-kube-api-access-vfbfw\") pod \"c6cf6da7-8a4a-4384-a8ef-86453c60beb6\" (UID: \"c6cf6da7-8a4a-4384-a8ef-86453c60beb6\") " Dec 01 10:48:37 crc kubenswrapper[4933]: I1201 10:48:37.358066 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6cf6da7-8a4a-4384-a8ef-86453c60beb6-utilities\") pod \"c6cf6da7-8a4a-4384-a8ef-86453c60beb6\" (UID: \"c6cf6da7-8a4a-4384-a8ef-86453c60beb6\") " Dec 01 10:48:37 crc kubenswrapper[4933]: I1201 10:48:37.358163 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6cf6da7-8a4a-4384-a8ef-86453c60beb6-catalog-content\") pod \"c6cf6da7-8a4a-4384-a8ef-86453c60beb6\" (UID: \"c6cf6da7-8a4a-4384-a8ef-86453c60beb6\") " Dec 01 10:48:37 crc kubenswrapper[4933]: I1201 10:48:37.359112 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6cf6da7-8a4a-4384-a8ef-86453c60beb6-utilities" (OuterVolumeSpecName: "utilities") pod "c6cf6da7-8a4a-4384-a8ef-86453c60beb6" (UID: "c6cf6da7-8a4a-4384-a8ef-86453c60beb6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:48:37 crc kubenswrapper[4933]: I1201 10:48:37.363649 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6cf6da7-8a4a-4384-a8ef-86453c60beb6-kube-api-access-vfbfw" (OuterVolumeSpecName: "kube-api-access-vfbfw") pod "c6cf6da7-8a4a-4384-a8ef-86453c60beb6" (UID: "c6cf6da7-8a4a-4384-a8ef-86453c60beb6"). InnerVolumeSpecName "kube-api-access-vfbfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:48:37 crc kubenswrapper[4933]: I1201 10:48:37.414712 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6cf6da7-8a4a-4384-a8ef-86453c60beb6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6cf6da7-8a4a-4384-a8ef-86453c60beb6" (UID: "c6cf6da7-8a4a-4384-a8ef-86453c60beb6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:48:37 crc kubenswrapper[4933]: I1201 10:48:37.460759 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6cf6da7-8a4a-4384-a8ef-86453c60beb6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 10:48:37 crc kubenswrapper[4933]: I1201 10:48:37.460806 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfbfw\" (UniqueName: \"kubernetes.io/projected/c6cf6da7-8a4a-4384-a8ef-86453c60beb6-kube-api-access-vfbfw\") on node \"crc\" DevicePath \"\"" Dec 01 10:48:37 crc kubenswrapper[4933]: I1201 10:48:37.460817 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6cf6da7-8a4a-4384-a8ef-86453c60beb6-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 10:48:37 crc kubenswrapper[4933]: I1201 10:48:37.568933 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t5ljc"] Dec 01 10:48:37 crc kubenswrapper[4933]: I1201 10:48:37.582396 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t5ljc"] Dec 01 10:48:37 crc kubenswrapper[4933]: I1201 10:48:37.679157 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6cf6da7-8a4a-4384-a8ef-86453c60beb6" path="/var/lib/kubelet/pods/c6cf6da7-8a4a-4384-a8ef-86453c60beb6/volumes" Dec 01 10:48:38 crc kubenswrapper[4933]: I1201 10:48:38.668383 4933 scope.go:117] "RemoveContainer" containerID="700470a90e5bac216ac65c0c1909e7b5b54ddd8643e5e759fd5d0fe0b7af56cb" Dec 01 10:48:38 crc kubenswrapper[4933]: E1201 10:48:38.669283 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:48:53 crc kubenswrapper[4933]: I1201 10:48:53.668560 4933 scope.go:117] "RemoveContainer" containerID="700470a90e5bac216ac65c0c1909e7b5b54ddd8643e5e759fd5d0fe0b7af56cb" Dec 01 10:48:53 crc kubenswrapper[4933]: E1201 10:48:53.669745 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:49:06 crc kubenswrapper[4933]: I1201 10:49:06.667138 4933 scope.go:117] "RemoveContainer" containerID="700470a90e5bac216ac65c0c1909e7b5b54ddd8643e5e759fd5d0fe0b7af56cb" Dec 01 10:49:06 crc kubenswrapper[4933]: E1201 10:49:06.668190 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:49:18 crc kubenswrapper[4933]: I1201 10:49:18.667780 4933 
scope.go:117] "RemoveContainer" containerID="700470a90e5bac216ac65c0c1909e7b5b54ddd8643e5e759fd5d0fe0b7af56cb" Dec 01 10:49:18 crc kubenswrapper[4933]: E1201 10:49:18.668765 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:49:25 crc kubenswrapper[4933]: I1201 10:49:25.002914 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tl9ts"] Dec 01 10:49:25 crc kubenswrapper[4933]: E1201 10:49:25.004720 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6cf6da7-8a4a-4384-a8ef-86453c60beb6" containerName="extract-utilities" Dec 01 10:49:25 crc kubenswrapper[4933]: I1201 10:49:25.004745 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6cf6da7-8a4a-4384-a8ef-86453c60beb6" containerName="extract-utilities" Dec 01 10:49:25 crc kubenswrapper[4933]: E1201 10:49:25.004773 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6cf6da7-8a4a-4384-a8ef-86453c60beb6" containerName="registry-server" Dec 01 10:49:25 crc kubenswrapper[4933]: I1201 10:49:25.004781 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6cf6da7-8a4a-4384-a8ef-86453c60beb6" containerName="registry-server" Dec 01 10:49:25 crc kubenswrapper[4933]: E1201 10:49:25.004794 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6cf6da7-8a4a-4384-a8ef-86453c60beb6" containerName="extract-content" Dec 01 10:49:25 crc kubenswrapper[4933]: I1201 10:49:25.004802 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6cf6da7-8a4a-4384-a8ef-86453c60beb6" containerName="extract-content" Dec 01 10:49:25 crc kubenswrapper[4933]: I1201 10:49:25.005046 4933 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6cf6da7-8a4a-4384-a8ef-86453c60beb6" containerName="registry-server" Dec 01 10:49:25 crc kubenswrapper[4933]: I1201 10:49:25.007149 4933 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tl9ts" Dec 01 10:49:25 crc kubenswrapper[4933]: I1201 10:49:25.024514 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tl9ts"] Dec 01 10:49:25 crc kubenswrapper[4933]: I1201 10:49:25.103012 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1044496-024f-4fe4-a9ab-266b0507d37b-catalog-content\") pod \"redhat-marketplace-tl9ts\" (UID: \"a1044496-024f-4fe4-a9ab-266b0507d37b\") " pod="openshift-marketplace/redhat-marketplace-tl9ts" Dec 01 10:49:25 crc kubenswrapper[4933]: I1201 10:49:25.103261 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dk2f\" (UniqueName: \"kubernetes.io/projected/a1044496-024f-4fe4-a9ab-266b0507d37b-kube-api-access-7dk2f\") pod \"redhat-marketplace-tl9ts\" (UID: \"a1044496-024f-4fe4-a9ab-266b0507d37b\") " pod="openshift-marketplace/redhat-marketplace-tl9ts" Dec 01 10:49:25 crc kubenswrapper[4933]: I1201 10:49:25.103509 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1044496-024f-4fe4-a9ab-266b0507d37b-utilities\") pod \"redhat-marketplace-tl9ts\" (UID: \"a1044496-024f-4fe4-a9ab-266b0507d37b\") " pod="openshift-marketplace/redhat-marketplace-tl9ts" Dec 01 10:49:25 crc kubenswrapper[4933]: I1201 10:49:25.206038 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1044496-024f-4fe4-a9ab-266b0507d37b-utilities\") pod \"redhat-marketplace-tl9ts\" (UID: \"a1044496-024f-4fe4-a9ab-266b0507d37b\") " pod="openshift-marketplace/redhat-marketplace-tl9ts" Dec 01 10:49:25 crc kubenswrapper[4933]: I1201 10:49:25.206181 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1044496-024f-4fe4-a9ab-266b0507d37b-catalog-content\") pod \"redhat-marketplace-tl9ts\" (UID: \"a1044496-024f-4fe4-a9ab-266b0507d37b\") " pod="openshift-marketplace/redhat-marketplace-tl9ts" Dec 01 10:49:25 crc kubenswrapper[4933]: I1201 10:49:25.206364 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dk2f\" (UniqueName: \"kubernetes.io/projected/a1044496-024f-4fe4-a9ab-266b0507d37b-kube-api-access-7dk2f\") pod \"redhat-marketplace-tl9ts\" (UID: \"a1044496-024f-4fe4-a9ab-266b0507d37b\") " pod="openshift-marketplace/redhat-marketplace-tl9ts" Dec 01 10:49:25 crc kubenswrapper[4933]: I1201 10:49:25.206759 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1044496-024f-4fe4-a9ab-266b0507d37b-utilities\") pod \"redhat-marketplace-tl9ts\" (UID: \"a1044496-024f-4fe4-a9ab-266b0507d37b\") " pod="openshift-marketplace/redhat-marketplace-tl9ts" Dec 01 10:49:25 crc kubenswrapper[4933]: I1201 10:49:25.206830 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1044496-024f-4fe4-a9ab-266b0507d37b-catalog-content\") pod \"redhat-marketplace-tl9ts\" (UID: \"a1044496-024f-4fe4-a9ab-266b0507d37b\") " pod="openshift-marketplace/redhat-marketplace-tl9ts" Dec 01 10:49:25 crc kubenswrapper[4933]: I1201 10:49:25.238371 4933 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7dk2f\" (UniqueName: \"kubernetes.io/projected/a1044496-024f-4fe4-a9ab-266b0507d37b-kube-api-access-7dk2f\") pod \"redhat-marketplace-tl9ts\" (UID: \"a1044496-024f-4fe4-a9ab-266b0507d37b\") " pod="openshift-marketplace/redhat-marketplace-tl9ts" Dec 01 10:49:25 crc kubenswrapper[4933]: I1201 10:49:25.336047 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tl9ts" Dec 01 10:49:25 crc kubenswrapper[4933]: I1201 10:49:25.866537 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tl9ts"] Dec 01 10:49:26 crc kubenswrapper[4933]: I1201 10:49:26.759025 4933 generic.go:334] "Generic (PLEG): container finished" podID="a1044496-024f-4fe4-a9ab-266b0507d37b" containerID="1e3e4d6beb233d79f09539aa0efa03596821d4c510e13fa98badb1cc59803150" exitCode=0 Dec 01 10:49:26 crc kubenswrapper[4933]: I1201 10:49:26.759140 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl9ts" event={"ID":"a1044496-024f-4fe4-a9ab-266b0507d37b","Type":"ContainerDied","Data":"1e3e4d6beb233d79f09539aa0efa03596821d4c510e13fa98badb1cc59803150"} Dec 01 10:49:26 crc kubenswrapper[4933]: I1201 10:49:26.759357 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl9ts" event={"ID":"a1044496-024f-4fe4-a9ab-266b0507d37b","Type":"ContainerStarted","Data":"d42a661131f1b8e65e6f1f86e3f3e79da2f46ff536eb5f77596e189850a5c86b"} Dec 01 10:49:27 crc kubenswrapper[4933]: I1201 10:49:27.776178 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl9ts" event={"ID":"a1044496-024f-4fe4-a9ab-266b0507d37b","Type":"ContainerStarted","Data":"db38489e48d113bc93366ac4efa4eccc4ef55fa0f1ef2d707eb6ce19673ea1c5"} Dec 01 10:49:28 crc kubenswrapper[4933]: I1201 10:49:28.792244 4933 generic.go:334] "Generic (PLEG): container finished" podID="a1044496-024f-4fe4-a9ab-266b0507d37b" containerID="db38489e48d113bc93366ac4efa4eccc4ef55fa0f1ef2d707eb6ce19673ea1c5" exitCode=0 Dec 01 10:49:28 crc kubenswrapper[4933]: I1201 10:49:28.792422 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl9ts" event={"ID":"a1044496-024f-4fe4-a9ab-266b0507d37b","Type":"ContainerDied","Data":"db38489e48d113bc93366ac4efa4eccc4ef55fa0f1ef2d707eb6ce19673ea1c5"} Dec 01 10:49:29 crc kubenswrapper[4933]: I1201 10:49:29.806677 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl9ts" event={"ID":"a1044496-024f-4fe4-a9ab-266b0507d37b","Type":"ContainerStarted","Data":"acab1def4e37b09c0909213232a3a3bbaba7ce4cfd18eb96038c70be211e8ae7"} Dec 01 10:49:29 crc kubenswrapper[4933]: I1201 10:49:29.841573 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tl9ts" podStartSLOduration=3.384018082 podStartE2EDuration="5.841537083s" podCreationTimestamp="2025-12-01 10:49:24 +0000 UTC" firstStartedPulling="2025-12-01 10:49:26.762951548 +0000 UTC m=+4657.404675163" lastFinishedPulling="2025-12-01 10:49:29.220470549 +0000 UTC m=+4659.862194164" observedRunningTime="2025-12-01 10:49:29.828543273 +0000 UTC m=+4660.470266908" watchObservedRunningTime="2025-12-01 10:49:29.841537083 +0000 UTC m=+4660.483260698" Dec 01 10:49:30 crc kubenswrapper[4933]: I1201 10:49:30.668532 4933 scope.go:117] "RemoveContainer" 
containerID="700470a90e5bac216ac65c0c1909e7b5b54ddd8643e5e759fd5d0fe0b7af56cb" Dec 01 10:49:30 crc kubenswrapper[4933]: E1201 10:49:30.668991 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:49:35 crc kubenswrapper[4933]: I1201 10:49:35.337375 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tl9ts" Dec 01 10:49:35 crc kubenswrapper[4933]: I1201 10:49:35.337920 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tl9ts" Dec 01 10:49:35 crc kubenswrapper[4933]: I1201 10:49:35.389817 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tl9ts" Dec 01 10:49:35 crc kubenswrapper[4933]: I1201 10:49:35.909490 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tl9ts" Dec 01 10:49:36 crc kubenswrapper[4933]: I1201 10:49:36.564662 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tl9ts"] Dec 01 10:49:37 crc kubenswrapper[4933]: I1201 10:49:37.882742 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tl9ts" podUID="a1044496-024f-4fe4-a9ab-266b0507d37b" containerName="registry-server" containerID="cri-o://acab1def4e37b09c0909213232a3a3bbaba7ce4cfd18eb96038c70be211e8ae7" gracePeriod=2 Dec 01 10:49:38 crc kubenswrapper[4933]: I1201 10:49:38.675149 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tl9ts" Dec 01 10:49:38 crc kubenswrapper[4933]: I1201 10:49:38.827558 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1044496-024f-4fe4-a9ab-266b0507d37b-catalog-content\") pod \"a1044496-024f-4fe4-a9ab-266b0507d37b\" (UID: \"a1044496-024f-4fe4-a9ab-266b0507d37b\") " Dec 01 10:49:38 crc kubenswrapper[4933]: I1201 10:49:38.827897 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1044496-024f-4fe4-a9ab-266b0507d37b-utilities\") pod \"a1044496-024f-4fe4-a9ab-266b0507d37b\" (UID: \"a1044496-024f-4fe4-a9ab-266b0507d37b\") " Dec 01 10:49:38 crc kubenswrapper[4933]: I1201 10:49:38.828069 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dk2f\" (UniqueName: \"kubernetes.io/projected/a1044496-024f-4fe4-a9ab-266b0507d37b-kube-api-access-7dk2f\") pod \"a1044496-024f-4fe4-a9ab-266b0507d37b\" (UID: \"a1044496-024f-4fe4-a9ab-266b0507d37b\") " Dec 01 10:49:38 crc kubenswrapper[4933]: I1201 10:49:38.829019 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1044496-024f-4fe4-a9ab-266b0507d37b-utilities" (OuterVolumeSpecName: "utilities") pod "a1044496-024f-4fe4-a9ab-266b0507d37b" (UID: "a1044496-024f-4fe4-a9ab-266b0507d37b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:49:38 crc kubenswrapper[4933]: I1201 10:49:38.835589 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1044496-024f-4fe4-a9ab-266b0507d37b-kube-api-access-7dk2f" (OuterVolumeSpecName: "kube-api-access-7dk2f") pod "a1044496-024f-4fe4-a9ab-266b0507d37b" (UID: "a1044496-024f-4fe4-a9ab-266b0507d37b"). InnerVolumeSpecName "kube-api-access-7dk2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:49:38 crc kubenswrapper[4933]: I1201 10:49:38.849219 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1044496-024f-4fe4-a9ab-266b0507d37b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1044496-024f-4fe4-a9ab-266b0507d37b" (UID: "a1044496-024f-4fe4-a9ab-266b0507d37b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:49:38 crc kubenswrapper[4933]: I1201 10:49:38.907117 4933 generic.go:334] "Generic (PLEG): container finished" podID="a1044496-024f-4fe4-a9ab-266b0507d37b" containerID="acab1def4e37b09c0909213232a3a3bbaba7ce4cfd18eb96038c70be211e8ae7" exitCode=0 Dec 01 10:49:38 crc kubenswrapper[4933]: I1201 10:49:38.907184 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tl9ts" Dec 01 10:49:38 crc kubenswrapper[4933]: I1201 10:49:38.907194 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl9ts" event={"ID":"a1044496-024f-4fe4-a9ab-266b0507d37b","Type":"ContainerDied","Data":"acab1def4e37b09c0909213232a3a3bbaba7ce4cfd18eb96038c70be211e8ae7"} Dec 01 10:49:38 crc kubenswrapper[4933]: I1201 10:49:38.907340 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl9ts" event={"ID":"a1044496-024f-4fe4-a9ab-266b0507d37b","Type":"ContainerDied","Data":"d42a661131f1b8e65e6f1f86e3f3e79da2f46ff536eb5f77596e189850a5c86b"} Dec 01 10:49:38 crc kubenswrapper[4933]: I1201 10:49:38.907377 4933 scope.go:117] "RemoveContainer" containerID="acab1def4e37b09c0909213232a3a3bbaba7ce4cfd18eb96038c70be211e8ae7" Dec 01 10:49:38 crc kubenswrapper[4933]: I1201 10:49:38.930991 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1044496-024f-4fe4-a9ab-266b0507d37b-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 10:49:38 crc kubenswrapper[4933]: I1201 10:49:38.931034 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dk2f\" (UniqueName: \"kubernetes.io/projected/a1044496-024f-4fe4-a9ab-266b0507d37b-kube-api-access-7dk2f\") on node \"crc\" DevicePath \"\"" Dec 01 10:49:38 crc kubenswrapper[4933]: I1201 10:49:38.931047 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1044496-024f-4fe4-a9ab-266b0507d37b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 10:49:38 crc kubenswrapper[4933]: I1201 10:49:38.983489 4933 scope.go:117] "RemoveContainer" containerID="db38489e48d113bc93366ac4efa4eccc4ef55fa0f1ef2d707eb6ce19673ea1c5" Dec 01 10:49:38 crc kubenswrapper[4933]: I1201 10:49:38.988521 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tl9ts"] Dec 01 10:49:39 crc kubenswrapper[4933]: I1201 10:49:39.008515 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-tl9ts"] Dec 01 10:49:39 crc kubenswrapper[4933]: I1201 10:49:39.027498 4933 scope.go:117] "RemoveContainer" containerID="1e3e4d6beb233d79f09539aa0efa03596821d4c510e13fa98badb1cc59803150" Dec 01 10:49:39 crc kubenswrapper[4933]: I1201 10:49:39.099155 4933 scope.go:117] "RemoveContainer" containerID="acab1def4e37b09c0909213232a3a3bbaba7ce4cfd18eb96038c70be211e8ae7" Dec 01 10:49:39 crc kubenswrapper[4933]: E1201 10:49:39.099802 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acab1def4e37b09c0909213232a3a3bbaba7ce4cfd18eb96038c70be211e8ae7\": container with ID starting with acab1def4e37b09c0909213232a3a3bbaba7ce4cfd18eb96038c70be211e8ae7 not found: ID does not exist" containerID="acab1def4e37b09c0909213232a3a3bbaba7ce4cfd18eb96038c70be211e8ae7" Dec 01 10:49:39 crc kubenswrapper[4933]: I1201 10:49:39.099841 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acab1def4e37b09c0909213232a3a3bbaba7ce4cfd18eb96038c70be211e8ae7"} err="failed to get container status \"acab1def4e37b09c0909213232a3a3bbaba7ce4cfd18eb96038c70be211e8ae7\": rpc error: code = NotFound desc = could not find container \"acab1def4e37b09c0909213232a3a3bbaba7ce4cfd18eb96038c70be211e8ae7\": container with ID starting with acab1def4e37b09c0909213232a3a3bbaba7ce4cfd18eb96038c70be211e8ae7 not found: ID does not exist" Dec 01 10:49:39 crc kubenswrapper[4933]: I1201 10:49:39.099869 4933 scope.go:117] "RemoveContainer" containerID="db38489e48d113bc93366ac4efa4eccc4ef55fa0f1ef2d707eb6ce19673ea1c5" Dec 01 10:49:39 crc kubenswrapper[4933]: E1201 10:49:39.100063 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db38489e48d113bc93366ac4efa4eccc4ef55fa0f1ef2d707eb6ce19673ea1c5\": container with ID starting with db38489e48d113bc93366ac4efa4eccc4ef55fa0f1ef2d707eb6ce19673ea1c5 not found: ID does not exist" containerID="db38489e48d113bc93366ac4efa4eccc4ef55fa0f1ef2d707eb6ce19673ea1c5" Dec 01 10:49:39 crc kubenswrapper[4933]: I1201 10:49:39.100102 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db38489e48d113bc93366ac4efa4eccc4ef55fa0f1ef2d707eb6ce19673ea1c5"} err="failed to get container status \"db38489e48d113bc93366ac4efa4eccc4ef55fa0f1ef2d707eb6ce19673ea1c5\": rpc error: code = NotFound desc = could not find container \"db38489e48d113bc93366ac4efa4eccc4ef55fa0f1ef2d707eb6ce19673ea1c5\": container with ID starting with db38489e48d113bc93366ac4efa4eccc4ef55fa0f1ef2d707eb6ce19673ea1c5 not found: ID does not exist" Dec 01 10:49:39 crc kubenswrapper[4933]: I1201 10:49:39.100122 4933 scope.go:117] "RemoveContainer" containerID="1e3e4d6beb233d79f09539aa0efa03596821d4c510e13fa98badb1cc59803150" Dec 01 10:49:39 crc kubenswrapper[4933]: E1201 10:49:39.100377 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e3e4d6beb233d79f09539aa0efa03596821d4c510e13fa98badb1cc59803150\": container with ID starting with 1e3e4d6beb233d79f09539aa0efa03596821d4c510e13fa98badb1cc59803150 not found: ID does not exist" containerID="1e3e4d6beb233d79f09539aa0efa03596821d4c510e13fa98badb1cc59803150" Dec 01 10:49:39 crc kubenswrapper[4933]: I1201 10:49:39.100402 4933 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1e3e4d6beb233d79f09539aa0efa03596821d4c510e13fa98badb1cc59803150"} err="failed to get container status \"1e3e4d6beb233d79f09539aa0efa03596821d4c510e13fa98badb1cc59803150\": rpc error: code = NotFound desc = could not find container \"1e3e4d6beb233d79f09539aa0efa03596821d4c510e13fa98badb1cc59803150\": container with ID starting with 1e3e4d6beb233d79f09539aa0efa03596821d4c510e13fa98badb1cc59803150 not found: ID does not exist" Dec 01 10:49:39 crc kubenswrapper[4933]: I1201 10:49:39.681194 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1044496-024f-4fe4-a9ab-266b0507d37b" path="/var/lib/kubelet/pods/a1044496-024f-4fe4-a9ab-266b0507d37b/volumes" Dec 01 10:49:44 crc kubenswrapper[4933]: I1201 10:49:44.668428 4933 scope.go:117] "RemoveContainer" containerID="700470a90e5bac216ac65c0c1909e7b5b54ddd8643e5e759fd5d0fe0b7af56cb" Dec 01 10:49:44 crc kubenswrapper[4933]: E1201 10:49:44.669193 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:49:57 crc kubenswrapper[4933]: I1201 10:49:57.668146 4933 scope.go:117] "RemoveContainer" containerID="700470a90e5bac216ac65c0c1909e7b5b54ddd8643e5e759fd5d0fe0b7af56cb" Dec 01 10:49:57 crc kubenswrapper[4933]: E1201 10:49:57.669460 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:50:06 crc kubenswrapper[4933]: I1201 10:50:06.351025 4933 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sqpmx"] Dec 01 10:50:06 crc kubenswrapper[4933]: E1201 10:50:06.352803 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1044496-024f-4fe4-a9ab-266b0507d37b" containerName="registry-server" Dec 01 10:50:06 crc kubenswrapper[4933]: I1201 10:50:06.352822 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1044496-024f-4fe4-a9ab-266b0507d37b" containerName="registry-server" Dec 01 10:50:06 crc kubenswrapper[4933]: E1201 10:50:06.352832 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1044496-024f-4fe4-a9ab-266b0507d37b" containerName="extract-content" Dec 01 10:50:06 crc kubenswrapper[4933]: I1201 10:50:06.352839 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1044496-024f-4fe4-a9ab-266b0507d37b" containerName="extract-content" Dec 01 10:50:06 crc kubenswrapper[4933]: E1201 10:50:06.352859 4933 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1044496-024f-4fe4-a9ab-266b0507d37b" containerName="extract-utilities" Dec 01 10:50:06 crc kubenswrapper[4933]: I1201 10:50:06.352865 4933 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1044496-024f-4fe4-a9ab-266b0507d37b" containerName="extract-utilities" Dec 01 10:50:06 crc kubenswrapper[4933]: I1201 10:50:06.353099 4933 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a1044496-024f-4fe4-a9ab-266b0507d37b" containerName="registry-server" Dec 01 10:50:06 crc kubenswrapper[4933]: I1201 10:50:06.354877 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sqpmx" Dec 01 10:50:06 crc kubenswrapper[4933]: I1201 10:50:06.399151 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sqpmx"] Dec 01 10:50:06 crc kubenswrapper[4933]: I1201 10:50:06.483573 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a35d4c6-f561-4254-a019-244bddd349b1-utilities\") pod \"certified-operators-sqpmx\" (UID: \"4a35d4c6-f561-4254-a019-244bddd349b1\") " pod="openshift-marketplace/certified-operators-sqpmx" Dec 01 10:50:06 crc kubenswrapper[4933]: I1201 10:50:06.483777 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ljd4\" (UniqueName: \"kubernetes.io/projected/4a35d4c6-f561-4254-a019-244bddd349b1-kube-api-access-7ljd4\") pod \"certified-operators-sqpmx\" (UID: \"4a35d4c6-f561-4254-a019-244bddd349b1\") " pod="openshift-marketplace/certified-operators-sqpmx" Dec 01 10:50:06 crc kubenswrapper[4933]: I1201 10:50:06.484030 4933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a35d4c6-f561-4254-a019-244bddd349b1-catalog-content\") pod \"certified-operators-sqpmx\" (UID: \"4a35d4c6-f561-4254-a019-244bddd349b1\") " pod="openshift-marketplace/certified-operators-sqpmx" Dec 01 10:50:06 crc kubenswrapper[4933]: I1201 10:50:06.586364 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a35d4c6-f561-4254-a019-244bddd349b1-catalog-content\") pod \"certified-operators-sqpmx\" (UID: \"4a35d4c6-f561-4254-a019-244bddd349b1\") " pod="openshift-marketplace/certified-operators-sqpmx" Dec 01 10:50:06 crc kubenswrapper[4933]: I1201 10:50:06.586530 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a35d4c6-f561-4254-a019-244bddd349b1-utilities\") pod \"certified-operators-sqpmx\" (UID: \"4a35d4c6-f561-4254-a019-244bddd349b1\") " pod="openshift-marketplace/certified-operators-sqpmx" Dec 01 10:50:06 crc kubenswrapper[4933]: I1201 10:50:06.586602 4933 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ljd4\" (UniqueName: \"kubernetes.io/projected/4a35d4c6-f561-4254-a019-244bddd349b1-kube-api-access-7ljd4\") pod \"certified-operators-sqpmx\" (UID: \"4a35d4c6-f561-4254-a019-244bddd349b1\") " pod="openshift-marketplace/certified-operators-sqpmx" Dec 01 10:50:06 crc kubenswrapper[4933]: I1201 10:50:06.587108 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a35d4c6-f561-4254-a019-244bddd349b1-catalog-content\") pod \"certified-operators-sqpmx\" (UID: \"4a35d4c6-f561-4254-a019-244bddd349b1\") " pod="openshift-marketplace/certified-operators-sqpmx" Dec 01 10:50:06 crc kubenswrapper[4933]: I1201 10:50:06.587179 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a35d4c6-f561-4254-a019-244bddd349b1-utilities\") pod \"certified-operators-sqpmx\" (UID: 
\"4a35d4c6-f561-4254-a019-244bddd349b1\") " pod="openshift-marketplace/certified-operators-sqpmx" Dec 01 10:50:06 crc kubenswrapper[4933]: I1201 10:50:06.616443 4933 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ljd4\" (UniqueName: \"kubernetes.io/projected/4a35d4c6-f561-4254-a019-244bddd349b1-kube-api-access-7ljd4\") pod \"certified-operators-sqpmx\" (UID: \"4a35d4c6-f561-4254-a019-244bddd349b1\") " pod="openshift-marketplace/certified-operators-sqpmx" Dec 01 10:50:06 crc kubenswrapper[4933]: I1201 10:50:06.678428 4933 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sqpmx" Dec 01 10:50:07 crc kubenswrapper[4933]: I1201 10:50:07.335427 4933 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sqpmx"] Dec 01 10:50:08 crc kubenswrapper[4933]: I1201 10:50:08.201048 4933 generic.go:334] "Generic (PLEG): container finished" podID="4a35d4c6-f561-4254-a019-244bddd349b1" containerID="856432131a6333f11484ac9f1114145751c46b450a80f76f3d9b9914a4de5012" exitCode=0 Dec 01 10:50:08 crc kubenswrapper[4933]: I1201 10:50:08.201172 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqpmx" event={"ID":"4a35d4c6-f561-4254-a019-244bddd349b1","Type":"ContainerDied","Data":"856432131a6333f11484ac9f1114145751c46b450a80f76f3d9b9914a4de5012"} Dec 01 10:50:08 crc kubenswrapper[4933]: I1201 10:50:08.202010 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqpmx" event={"ID":"4a35d4c6-f561-4254-a019-244bddd349b1","Type":"ContainerStarted","Data":"b07e097f6cdccd360574543583c96751f22733430467fe1ce10e9ff8f85b8035"} Dec 01 10:50:08 crc kubenswrapper[4933]: I1201 10:50:08.208100 4933 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 10:50:10 crc kubenswrapper[4933]: I1201 10:50:10.227127 4933 generic.go:334] "Generic (PLEG): container finished" podID="4a35d4c6-f561-4254-a019-244bddd349b1" containerID="68381c9c0c549be202cd8a531dcd9069f6a3107aae573feb392c772a66a9e4f8" exitCode=0 Dec 01 10:50:10 crc kubenswrapper[4933]: I1201 10:50:10.227212 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqpmx" event={"ID":"4a35d4c6-f561-4254-a019-244bddd349b1","Type":"ContainerDied","Data":"68381c9c0c549be202cd8a531dcd9069f6a3107aae573feb392c772a66a9e4f8"} Dec 01 10:50:11 crc kubenswrapper[4933]: I1201 10:50:11.240400 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqpmx" event={"ID":"4a35d4c6-f561-4254-a019-244bddd349b1","Type":"ContainerStarted","Data":"366de21e644319c4a8bca3b4296759df6d6ce792d2f0370283d54c07c7ee2220"} Dec 01 10:50:11 crc kubenswrapper[4933]: I1201 10:50:11.266902 4933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sqpmx" podStartSLOduration=2.540167843 podStartE2EDuration="5.266869344s" podCreationTimestamp="2025-12-01 10:50:06 +0000 UTC" firstStartedPulling="2025-12-01 10:50:08.207761337 +0000 UTC m=+4698.849484952" lastFinishedPulling="2025-12-01 10:50:10.934462838 +0000 UTC m=+4701.576186453" observedRunningTime="2025-12-01 10:50:11.260584879 +0000 UTC m=+4701.902308504" watchObservedRunningTime="2025-12-01 10:50:11.266869344 +0000 UTC m=+4701.908592959" Dec 01 10:50:11 crc kubenswrapper[4933]: I1201 10:50:11.667900 4933 scope.go:117] 
"RemoveContainer" containerID="700470a90e5bac216ac65c0c1909e7b5b54ddd8643e5e759fd5d0fe0b7af56cb" Dec 01 10:50:11 crc kubenswrapper[4933]: E1201 10:50:11.668292 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:50:16 crc kubenswrapper[4933]: I1201 10:50:16.679353 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sqpmx" Dec 01 10:50:16 crc kubenswrapper[4933]: I1201 10:50:16.679976 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sqpmx" Dec 01 10:50:16 crc kubenswrapper[4933]: I1201 10:50:16.733822 4933 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sqpmx" Dec 01 10:50:17 crc kubenswrapper[4933]: I1201 10:50:17.349937 4933 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sqpmx" Dec 01 10:50:17 crc kubenswrapper[4933]: I1201 10:50:17.406268 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sqpmx"] Dec 01 10:50:19 crc kubenswrapper[4933]: I1201 10:50:19.325796 4933 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sqpmx" podUID="4a35d4c6-f561-4254-a019-244bddd349b1" containerName="registry-server" containerID="cri-o://366de21e644319c4a8bca3b4296759df6d6ce792d2f0370283d54c07c7ee2220" gracePeriod=2 Dec 01 10:50:19 crc kubenswrapper[4933]: I1201 10:50:19.988673 4933 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sqpmx" Dec 01 10:50:20 crc kubenswrapper[4933]: I1201 10:50:20.155376 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a35d4c6-f561-4254-a019-244bddd349b1-catalog-content\") pod \"4a35d4c6-f561-4254-a019-244bddd349b1\" (UID: \"4a35d4c6-f561-4254-a019-244bddd349b1\") " Dec 01 10:50:20 crc kubenswrapper[4933]: I1201 10:50:20.155662 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ljd4\" (UniqueName: \"kubernetes.io/projected/4a35d4c6-f561-4254-a019-244bddd349b1-kube-api-access-7ljd4\") pod \"4a35d4c6-f561-4254-a019-244bddd349b1\" (UID: \"4a35d4c6-f561-4254-a019-244bddd349b1\") " Dec 01 10:50:20 crc kubenswrapper[4933]: I1201 10:50:20.155697 4933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a35d4c6-f561-4254-a019-244bddd349b1-utilities\") pod \"4a35d4c6-f561-4254-a019-244bddd349b1\" (UID: \"4a35d4c6-f561-4254-a019-244bddd349b1\") " Dec 01 10:50:20 crc kubenswrapper[4933]: I1201 10:50:20.156874 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a35d4c6-f561-4254-a019-244bddd349b1-utilities" (OuterVolumeSpecName: "utilities") pod "4a35d4c6-f561-4254-a019-244bddd349b1" (UID: "4a35d4c6-f561-4254-a019-244bddd349b1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:50:20 crc kubenswrapper[4933]: I1201 10:50:20.164597 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a35d4c6-f561-4254-a019-244bddd349b1-kube-api-access-7ljd4" (OuterVolumeSpecName: "kube-api-access-7ljd4") pod "4a35d4c6-f561-4254-a019-244bddd349b1" (UID: "4a35d4c6-f561-4254-a019-244bddd349b1"). InnerVolumeSpecName "kube-api-access-7ljd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 10:50:20 crc kubenswrapper[4933]: I1201 10:50:20.208306 4933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a35d4c6-f561-4254-a019-244bddd349b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a35d4c6-f561-4254-a019-244bddd349b1" (UID: "4a35d4c6-f561-4254-a019-244bddd349b1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 10:50:20 crc kubenswrapper[4933]: I1201 10:50:20.258930 4933 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a35d4c6-f561-4254-a019-244bddd349b1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 10:50:20 crc kubenswrapper[4933]: I1201 10:50:20.258992 4933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ljd4\" (UniqueName: \"kubernetes.io/projected/4a35d4c6-f561-4254-a019-244bddd349b1-kube-api-access-7ljd4\") on node \"crc\" DevicePath \"\"" Dec 01 10:50:20 crc kubenswrapper[4933]: I1201 10:50:20.259010 4933 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a35d4c6-f561-4254-a019-244bddd349b1-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 10:50:20 crc kubenswrapper[4933]: I1201 10:50:20.337641 4933 generic.go:334] "Generic (PLEG): container finished" podID="4a35d4c6-f561-4254-a019-244bddd349b1" containerID="366de21e644319c4a8bca3b4296759df6d6ce792d2f0370283d54c07c7ee2220" exitCode=0 Dec 01 10:50:20 crc kubenswrapper[4933]: I1201 10:50:20.337699 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqpmx" event={"ID":"4a35d4c6-f561-4254-a019-244bddd349b1","Type":"ContainerDied","Data":"366de21e644319c4a8bca3b4296759df6d6ce792d2f0370283d54c07c7ee2220"} Dec 01 10:50:20 crc kubenswrapper[4933]: I1201 10:50:20.337737 4933 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sqpmx" Dec 01 10:50:20 crc kubenswrapper[4933]: I1201 10:50:20.337755 4933 scope.go:117] "RemoveContainer" containerID="366de21e644319c4a8bca3b4296759df6d6ce792d2f0370283d54c07c7ee2220" Dec 01 10:50:20 crc kubenswrapper[4933]: I1201 10:50:20.337741 4933 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqpmx" event={"ID":"4a35d4c6-f561-4254-a019-244bddd349b1","Type":"ContainerDied","Data":"b07e097f6cdccd360574543583c96751f22733430467fe1ce10e9ff8f85b8035"} Dec 01 10:50:20 crc kubenswrapper[4933]: I1201 10:50:20.359359 4933 scope.go:117] "RemoveContainer" containerID="68381c9c0c549be202cd8a531dcd9069f6a3107aae573feb392c772a66a9e4f8" Dec 01 10:50:20 crc kubenswrapper[4933]: I1201 10:50:20.380205 4933 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sqpmx"] Dec 01 10:50:20 crc kubenswrapper[4933]: I1201 10:50:20.389553 4933 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sqpmx"] Dec 01 10:50:20 crc kubenswrapper[4933]: I1201 10:50:20.398652 4933 scope.go:117] "RemoveContainer" containerID="856432131a6333f11484ac9f1114145751c46b450a80f76f3d9b9914a4de5012" Dec 01 10:50:20 crc kubenswrapper[4933]: I1201 10:50:20.447529 4933 scope.go:117] "RemoveContainer" containerID="366de21e644319c4a8bca3b4296759df6d6ce792d2f0370283d54c07c7ee2220" Dec 01 10:50:20 crc kubenswrapper[4933]: E1201 10:50:20.448242 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"366de21e644319c4a8bca3b4296759df6d6ce792d2f0370283d54c07c7ee2220\": container with ID starting with 366de21e644319c4a8bca3b4296759df6d6ce792d2f0370283d54c07c7ee2220 not found: ID does not exist" containerID="366de21e644319c4a8bca3b4296759df6d6ce792d2f0370283d54c07c7ee2220" Dec 01 10:50:20 crc kubenswrapper[4933]: I1201 10:50:20.448304 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"366de21e644319c4a8bca3b4296759df6d6ce792d2f0370283d54c07c7ee2220"} err="failed to get container status \"366de21e644319c4a8bca3b4296759df6d6ce792d2f0370283d54c07c7ee2220\": rpc error: code = NotFound desc = could not find container \"366de21e644319c4a8bca3b4296759df6d6ce792d2f0370283d54c07c7ee2220\": container with ID starting with 366de21e644319c4a8bca3b4296759df6d6ce792d2f0370283d54c07c7ee2220 not found: ID does not exist" Dec 01 10:50:20 crc kubenswrapper[4933]: I1201 10:50:20.448361 4933 scope.go:117] "RemoveContainer" containerID="68381c9c0c549be202cd8a531dcd9069f6a3107aae573feb392c772a66a9e4f8" Dec 01 10:50:20 crc kubenswrapper[4933]: E1201 10:50:20.448898 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68381c9c0c549be202cd8a531dcd9069f6a3107aae573feb392c772a66a9e4f8\": container with ID starting with 68381c9c0c549be202cd8a531dcd9069f6a3107aae573feb392c772a66a9e4f8 not found: ID does not exist" containerID="68381c9c0c549be202cd8a531dcd9069f6a3107aae573feb392c772a66a9e4f8" Dec 01 10:50:20 crc kubenswrapper[4933]: I1201 10:50:20.448936 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68381c9c0c549be202cd8a531dcd9069f6a3107aae573feb392c772a66a9e4f8"} err="failed to get container status \"68381c9c0c549be202cd8a531dcd9069f6a3107aae573feb392c772a66a9e4f8\": rpc error: code = NotFound desc = could not find 
container \"68381c9c0c549be202cd8a531dcd9069f6a3107aae573feb392c772a66a9e4f8\": container with ID starting with 68381c9c0c549be202cd8a531dcd9069f6a3107aae573feb392c772a66a9e4f8 not found: ID does not exist" Dec 01 10:50:20 crc kubenswrapper[4933]: I1201 10:50:20.448959 4933 scope.go:117] "RemoveContainer" containerID="856432131a6333f11484ac9f1114145751c46b450a80f76f3d9b9914a4de5012" Dec 01 10:50:20 crc kubenswrapper[4933]: E1201 10:50:20.449294 4933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"856432131a6333f11484ac9f1114145751c46b450a80f76f3d9b9914a4de5012\": container with ID starting with 856432131a6333f11484ac9f1114145751c46b450a80f76f3d9b9914a4de5012 not found: ID does not exist" containerID="856432131a6333f11484ac9f1114145751c46b450a80f76f3d9b9914a4de5012" Dec 01 10:50:20 crc kubenswrapper[4933]: I1201 10:50:20.449382 4933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"856432131a6333f11484ac9f1114145751c46b450a80f76f3d9b9914a4de5012"} err="failed to get container status \"856432131a6333f11484ac9f1114145751c46b450a80f76f3d9b9914a4de5012\": rpc error: code = NotFound desc = could not find container \"856432131a6333f11484ac9f1114145751c46b450a80f76f3d9b9914a4de5012\": container with ID starting with 856432131a6333f11484ac9f1114145751c46b450a80f76f3d9b9914a4de5012 not found: ID does not exist" Dec 01 10:50:21 crc kubenswrapper[4933]: I1201 10:50:21.678553 4933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a35d4c6-f561-4254-a019-244bddd349b1" path="/var/lib/kubelet/pods/4a35d4c6-f561-4254-a019-244bddd349b1/volumes" Dec 01 10:50:25 crc kubenswrapper[4933]: I1201 10:50:25.669081 4933 scope.go:117] "RemoveContainer" containerID="700470a90e5bac216ac65c0c1909e7b5b54ddd8643e5e759fd5d0fe0b7af56cb" Dec 01 10:50:25 crc kubenswrapper[4933]: E1201 10:50:25.671228 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:50:40 crc kubenswrapper[4933]: I1201 10:50:40.667504 4933 scope.go:117] "RemoveContainer" containerID="700470a90e5bac216ac65c0c1909e7b5b54ddd8643e5e759fd5d0fe0b7af56cb" Dec 01 10:50:40 crc kubenswrapper[4933]: E1201 10:50:40.668437 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23" Dec 01 10:50:52 crc kubenswrapper[4933]: I1201 10:50:52.667797 4933 scope.go:117] "RemoveContainer" containerID="700470a90e5bac216ac65c0c1909e7b5b54ddd8643e5e759fd5d0fe0b7af56cb" Dec 01 10:50:52 crc kubenswrapper[4933]: E1201 10:50:52.669059 4933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k4lcd_openshift-machine-config-operator(31deca5a-8ffe-4967-b02f-98a2043ddb23)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4lcd" podUID="31deca5a-8ffe-4967-b02f-98a2043ddb23"